Re: sysbench throughput degradation in 4.13+

2017-10-10 Thread Christian Borntraeger
On 10/10/2017 07:26 PM, Ingo Molnar wrote:
>
> * Peter Zijlstra wrote:
>
>> On Tue, Oct 10, 2017 at 03:51:37PM +0100, Matt Fleming wrote:
>>> On Fri, 06 Oct, at 11:36:23AM, Matt Fleming wrote:
>>>>
>>>> It's a similar story for hackbench-threads-{pipes,sockets}, i.e.
>>>> pipes regress but

Re: sysbench throughput degradation in 4.13+

2017-10-10 Thread Ingo Molnar
* Peter Zijlstra wrote:
> On Tue, Oct 10, 2017 at 03:51:37PM +0100, Matt Fleming wrote:
> > On Fri, 06 Oct, at 11:36:23AM, Matt Fleming wrote:
> > >
> > > It's a similar story for hackbench-threads-{pipes,sockets}, i.e. pipes
> > > regress but performance is restored for sockets.
> > >

Re: sysbench throughput degradation in 4.13+

2017-10-10 Thread Peter Zijlstra
On Tue, Oct 10, 2017 at 03:51:37PM +0100, Matt Fleming wrote:
> On Fri, 06 Oct, at 11:36:23AM, Matt Fleming wrote:
> >
> > It's a similar story for hackbench-threads-{pipes,sockets}, i.e. pipes
> > regress but performance is restored for sockets.
> >
> > Of course, like a dope, I forgot to

Re: sysbench throughput degradation in 4.13+

2017-10-10 Thread Matt Fleming
On Fri, 06 Oct, at 11:36:23AM, Matt Fleming wrote:
>
> It's a similar story for hackbench-threads-{pipes,sockets}, i.e. pipes
> regress but performance is restored for sockets.
>
> Of course, like a dope, I forgot to re-run netperf with your WA_WEIGHT
> patch. So I've queued that up now and it

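For readers reproducing this: the hackbench-threads-{pipes,sockets} variants named above correspond to invocations roughly like the following (the group count is an illustrative assumption, not the exact parameters used in the thread):

    hackbench -g 20 -T -p    # threads + pipes: the combination that regresses
    hackbench -g 20 -T       # threads + sockets: performance is restored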

Re: sysbench throughput degradation in 4.13+

2017-10-06 Thread Matt Fleming
On Wed, 04 Oct, at 06:18:50PM, Peter Zijlstra wrote:
> On Tue, Oct 03, 2017 at 10:39:32AM +0200, Peter Zijlstra wrote:
> > So I was waiting for Rik, who promised to run a bunch of NUMA workloads
> > over the weekend.
> >
> > The trivial thing regresses a wee bit on the overloaded case, I've not

Re: sysbench throughput degradation in 4.13+

2017-10-04 Thread Rik van Riel
On Wed, 2017-10-04 at 18:18 +0200, Peter Zijlstra wrote:
> On Tue, Oct 03, 2017 at 10:39:32AM +0200, Peter Zijlstra wrote:
> > So I was waiting for Rik, who promised to run a bunch of NUMA
> > workloads
> > over the weekend.
> >
> > The trivial thing regresses a wee bit on the overloaded case,

Re: sysbench throughput degradation in 4.13+

2017-10-04 Thread Peter Zijlstra
On Tue, Oct 03, 2017 at 10:39:32AM +0200, Peter Zijlstra wrote:
> So I was waiting for Rik, who promised to run a bunch of NUMA workloads
> over the weekend.
>
> The trivial thing regresses a wee bit on the overloaded case, I've not
> yet tried to fix it.

WA_IDLE is my 'old' patch and what you

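WA_IDLE and WA_WEIGHT here are scheduler feature bits, so on a kernel carrying these patches (built with CONFIG_SCHED_DEBUG) the two heuristics can be compared at runtime without rebuilding. A minimal sketch, assuming debugfs is mounted in the usual place:

    cat /sys/kernel/debug/sched_features     # a NO_ prefix means disabled
    echo WA_WEIGHT  > /sys/kernel/debug/sched_features
    echo NO_WA_IDLE > /sys/kernel/debug/sched_features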

Re: sysbench throughput degradation in 4.13+

2017-10-03 Thread Rik van Riel
On Tue, 2017-10-03 at 10:39 +0200, Peter Zijlstra wrote:
> On Mon, Oct 02, 2017 at 11:53:12PM +0100, Matt Fleming wrote:
> > On Wed, 27 Sep, at 01:58:20PM, Rik van Riel wrote:
> > >
> > > I like the simplicity of your approach!  I hope it does not break
> > > stuff like netperf...
> > >
> > > I

Re: sysbench throughput degradation in 4.13+

2017-10-03 Thread Peter Zijlstra
On Mon, Oct 02, 2017 at 11:53:12PM +0100, Matt Fleming wrote:
> On Wed, 27 Sep, at 01:58:20PM, Rik van Riel wrote:
> >
> > I like the simplicity of your approach! I hope it does not break
> > stuff like netperf...
> >
> > I have been working on the patch below, which is much less optimistic

Re: sysbench throughput degradation in 4.13+

2017-10-02 Thread Matt Fleming
On Wed, 27 Sep, at 01:58:20PM, Rik van Riel wrote:
>
> I like the simplicity of your approach! I hope it does not break
> stuff like netperf...
>
> I have been working on the patch below, which is much less optimistic
> about when to do an affine wakeup than before.

Running netperf for this

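A typical localhost netperf run for checking wakeup-affinity changes looks something like the following (test type and duration are assumptions, not the configuration used in the thread); TCP_RR is the request/response mode most sensitive to where the wakee lands:

    netserver                                # start the receiver daemon
    netperf -t TCP_RR -H 127.0.0.1 -l 30     # 30s of request/response round trips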

Re: sysbench throughput degradation in 4.13+

2017-09-28 Thread Peter Zijlstra
On Wed, Sep 27, 2017 at 01:58:20PM -0400, Rik van Riel wrote:
> @@ -5359,10 +5378,14 @@ wake_affine_llc(struct sched_domain *sd, struct task_struct *p,
>	unsigned long current_load = task_h_load(current);
>
>	/* in this case load hits 0 and this LLC is considered

Re: sysbench throughput degradation in 4.13+

2017-09-28 Thread Peter Zijlstra
On Wed, Sep 27, 2017 at 01:58:20PM -0400, Rik van Riel wrote:
> I like the simplicity of your approach! I hope it does not break
> stuff like netperf...

So the old approach that looks at the weight of the two CPUs behaves
slightly better in the overloaded case. On the threads==nr_cpus load

Re: sysbench throughput degradation in 4.13+

2017-09-28 Thread Eric Farman
On 09/27/2017 01:58 PM, Rik van Riel wrote:
> On Wed, 27 Sep 2017 11:35:30 +0200 Peter Zijlstra wrote:
>> On Fri, Sep 22, 2017 at 12:12:45PM -0400, Eric Farman wrote:
>>>
>>> MySQL. We've tried a few different configs with both test=oltp and
>>> test=threads, but both show the same behavior. What I have

Re: sysbench throughput degradation in 4.13+

2017-09-28 Thread Christian Borntraeger
On 09/27/2017 06:27 PM, Eric Farman wrote:
>
>
> On 09/27/2017 05:35 AM, Peter Zijlstra wrote:
>> On Fri, Sep 22, 2017 at 12:12:45PM -0400, Eric Farman wrote:
>>>
>>> MySQL. We've tried a few different configs with both test=oltp and
>>> test=threads, but both show the same behavior. What I

Re: sysbench throughput degradation in 4.13+

2017-09-27 Thread Rik van Riel
On Wed, 27 Sep 2017 11:35:30 +0200 Peter Zijlstra wrote:
> On Fri, Sep 22, 2017 at 12:12:45PM -0400, Eric Farman wrote:
> >
> > MySQL. We've tried a few different configs with both test=oltp and
> > test=threads, but both show the same behavior. What I have settled on for
> > my repro is the

Re: sysbench throughput degradation in 4.13+

2017-09-27 Thread Eric Farman
On 09/27/2017 05:35 AM, Peter Zijlstra wrote:
> On Fri, Sep 22, 2017 at 12:12:45PM -0400, Eric Farman wrote:
>>
>> MySQL. We've tried a few different configs with both test=oltp and
>> test=threads, but both show the same behavior. What I have settled on for
>> my repro is the following:
>
> Right,

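The repro command itself is truncated in the archive snippet above. A hypothetical invocation in the same spirit, using the legacy sysbench syntax that the test=oltp name implies (database, user, table size, and thread count are all placeholders, not the poster's actual settings):

    sysbench --test=oltp --mysql-db=sbtest --mysql-user=sbtest \
             --oltp-table-size=1000000 prepare
    sysbench --test=oltp --mysql-db=sbtest --mysql-user=sbtest \
             --num-threads=16 --max-time=60 run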

Re: sysbench throughput degradation in 4.13+

2017-09-27 Thread Peter Zijlstra
On Fri, Sep 22, 2017 at 12:12:45PM -0400, Eric Farman wrote:
>
> MySQL. We've tried a few different configs with both test=oltp and
> test=threads, but both show the same behavior. What I have settled on for
> my repro is the following:
>

Right, didn't even need to run it in a guest to

Re: sysbench throughput degradation in 4.13+

2017-09-22 Thread Eric Farman
On 09/22/2017 11:53 AM, Peter Zijlstra wrote:
> On Fri, Sep 22, 2017 at 11:03:39AM -0400, Eric Farman wrote:
>> Hi Peter, Rik,
>>
>> With OSS last week, I'm sure this got lost in the deluge, so here's a
>> friendly ping.
>
> Very much so, inbox is a giant trainwreck ;-)

My apologies. :)  I picked up

Re: sysbench throughput degradation in 4.13+

2017-09-22 Thread Peter Zijlstra
On Fri, Sep 22, 2017 at 11:03:39AM -0400, Eric Farman wrote:
> Hi Peter, Rik,
>
> With OSS last week, I'm sure this got lost in the deluge, so here's a
> friendly ping.

Very much so, inbox is a giant trainwreck ;-)

> I picked up 4.14.0-rc1 earlier this week, and still see the
> degradation

Re: sysbench throughput degradation in 4.13+

2017-09-22 Thread Eric Farman
On 09/13/2017 04:24 AM, 王金浦 wrote:
> 2017-09-12 16:14 GMT+02:00 Eric Farman :
>> Hi Peter, Rik,
>>
>> Running sysbench measurements in a 16CPU/30GB KVM guest on a 20CPU/40GB
>> s390x host, we noticed a throughput degradation (anywhere between 13% and
>> 40%, depending on test) when moving the host from

Re: sysbench throughput degradation in 4.13+

2017-09-13 Thread 王金浦
2017-09-12 16:14 GMT+02:00 Eric Farman :
> Hi Peter, Rik,
>
> Running sysbench measurements in a 16CPU/30GB KVM guest on a 20CPU/40GB
> s390x host, we noticed a throughput degradation (anywhere between 13% and
> 40%, depending on test) when moving the host from kernel 4.12 to 4.13. The
> rest of

sysbench throughput degradation in 4.13+

2017-09-12 Thread Eric Farman
Hi Peter, Rik,

Running sysbench measurements in a 16CPU/30GB KVM guest on a 20CPU/40GB
s390x host, we noticed a throughput degradation (anywhere between 13% and
40%, depending on test) when moving the host from kernel 4.12 to 4.13. The
rest of the host and the entire guest remain unchanged;

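As a rough illustration of the kind of measurement reported above (thread count and duration are assumptions, not the reported configuration), the non-database variant can be driven without MySQL at all, using the legacy sysbench syntax current at the time:

    sysbench --test=threads --num-threads=16 --max-time=60 run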