On 10/10/2017 07:26 PM, Ingo Molnar wrote:
>
> * Peter Zijlstra wrote:
>
>> On Tue, Oct 10, 2017 at 03:51:37PM +0100, Matt Fleming wrote:
>>> On Fri, 06 Oct, at 11:36:23AM, Matt Fleming wrote:
It's a similar story for hackbench-threads-{pipes,sockets}, i.e. pipes
regress but
* Peter Zijlstra wrote:
> On Tue, Oct 10, 2017 at 03:51:37PM +0100, Matt Fleming wrote:
> > On Fri, 06 Oct, at 11:36:23AM, Matt Fleming wrote:
> > >
> > > It's a similar story for hackbench-threads-{pipes,sockets}, i.e. pipes
> > > regress but performance is restored for sockets.
> > >
On Tue, Oct 10, 2017 at 03:51:37PM +0100, Matt Fleming wrote:
> On Fri, 06 Oct, at 11:36:23AM, Matt Fleming wrote:
> >
> > It's a similar story for hackbench-threads-{pipes,sockets}, i.e. pipes
> > regress but performance is restored for sockets.
> >
> > Of course, like a dope, I forgot to
On Fri, 06 Oct, at 11:36:23AM, Matt Fleming wrote:
>
> It's a similar story for hackbench-threads-{pipes,sockets}, i.e. pipes
> regress but performance is restored for sockets.
>
> Of course, like a dope, I forgot to re-run netperf with your WA_WEIGHT
> patch. So I've queued that up now and it
On Wed, 04 Oct, at 06:18:50PM, Peter Zijlstra wrote:
> On Tue, Oct 03, 2017 at 10:39:32AM +0200, Peter Zijlstra wrote:
> > So I was waiting for Rik, who promised to run a bunch of NUMA workloads
> > over the weekend.
> >
> > The trivial thing regresses a wee bit on the overloaded case, I've not
>
On Wed, 2017-10-04 at 18:18 +0200, Peter Zijlstra wrote:
> On Tue, Oct 03, 2017 at 10:39:32AM +0200, Peter Zijlstra wrote:
> > So I was waiting for Rik, who promised to run a bunch of NUMA
> > workloads over the weekend.
> >
> > The trivial thing regresses a wee bit on the overloaded case,
On Tue, Oct 03, 2017 at 10:39:32AM +0200, Peter Zijlstra wrote:
> So I was waiting for Rik, who promised to run a bunch of NUMA workloads
> over the weekend.
>
> The trivial thing regresses a wee bit on the overloaded case, I've not
> yet tried to fix it.
WA_IDLE is my 'old' patch and what you
On Tue, 2017-10-03 at 10:39 +0200, Peter Zijlstra wrote:
> On Mon, Oct 02, 2017 at 11:53:12PM +0100, Matt Fleming wrote:
> > On Wed, 27 Sep, at 01:58:20PM, Rik van Riel wrote:
> > >
> > > I like the simplicity of your approach! I hope it does not break
> > > stuff like netperf...
> > >
> > > I
On Mon, Oct 02, 2017 at 11:53:12PM +0100, Matt Fleming wrote:
> On Wed, 27 Sep, at 01:58:20PM, Rik van Riel wrote:
> >
> > I like the simplicity of your approach! I hope it does not break
> > stuff like netperf...
> >
> > I have been working on the patch below, which is much less optimistic
> >
On Wed, 27 Sep, at 01:58:20PM, Rik van Riel wrote:
>
> I like the simplicity of your approach! I hope it does not break
> stuff like netperf...
>
> I have been working on the patch below, which is much less optimistic
> about when to do an affine wakeup than before.
Running netperf for this
On Wed, Sep 27, 2017 at 01:58:20PM -0400, Rik van Riel wrote:
> @@ -5359,10 +5378,14 @@ wake_affine_llc(struct sched_domain *sd, struct task_struct *p,
> 	unsigned long current_load = task_h_load(current);
>
> 	/* in this case load hits 0 and this LLC is considered
On Wed, Sep 27, 2017 at 01:58:20PM -0400, Rik van Riel wrote:
> I like the simplicity of your approach! I hope it does not break
> stuff like netperf...
So the old approach that looks at the weight of the two CPUs behaves
slightly better in the overloaded case. On the threads==nr_cpus load
On 09/27/2017 01:58 PM, Rik van Riel wrote:
> On Wed, 27 Sep 2017 11:35:30 +0200
> Peter Zijlstra wrote:
> > On Fri, Sep 22, 2017 at 12:12:45PM -0400, Eric Farman wrote:
> > > MySQL. We've tried a few different configs with both test=oltp and
> > > test=threads, but both show the same behavior. What I have
On 09/27/2017 06:27 PM, Eric Farman wrote:
>
> On 09/27/2017 05:35 AM, Peter Zijlstra wrote:
>> On Fri, Sep 22, 2017 at 12:12:45PM -0400, Eric Farman wrote:
>>>
>>> MySQL. We've tried a few different configs with both test=oltp and
>>> test=threads, but both show the same behavior. What I
On Wed, 27 Sep 2017 11:35:30 +0200
Peter Zijlstra wrote:
> On Fri, Sep 22, 2017 at 12:12:45PM -0400, Eric Farman wrote:
> >
> > MySQL. We've tried a few different configs with both test=oltp and
> > test=threads, but both show the same behavior. What I have settled on for
> > my repro is the
On 09/27/2017 05:35 AM, Peter Zijlstra wrote:
> On Fri, Sep 22, 2017 at 12:12:45PM -0400, Eric Farman wrote:
> > MySQL. We've tried a few different configs with both test=oltp and
> > test=threads, but both show the same behavior. What I have settled on for
> > my repro is the following:
> Right,
On Fri, Sep 22, 2017 at 12:12:45PM -0400, Eric Farman wrote:
>
> MySQL. We've tried a few different configs with both test=oltp and
> test=threads, but both show the same behavior. What I have settled on for
> my repro is the following:
>
Right, didn't even need to run it in a guest to
On 09/22/2017 11:53 AM, Peter Zijlstra wrote:
> On Fri, Sep 22, 2017 at 11:03:39AM -0400, Eric Farman wrote:
> > Hi Peter, Rik,
> >
> > With OSS last week, I'm sure this got lost in the deluge, so here's a
> > friendly ping.
> Very much so, inbox is a giant trainwreck ;-)
My apologies. :)
I picked up
On Fri, Sep 22, 2017 at 11:03:39AM -0400, Eric Farman wrote:
> Hi Peter, Rik,
>
> With OSS last week, I'm sure this got lost in the deluge, so here's a
> friendly ping.
Very much so, inbox is a giant trainwreck ;-)
> I picked up 4.14.0-rc1 earlier this week, and still see the
> degradation
On 09/13/2017 04:24 AM, 王金浦 wrote:
> 2017-09-12 16:14 GMT+02:00 Eric Farman :
> > Hi Peter, Rik,
> >
> > Running sysbench measurements in a 16CPU/30GB KVM guest on a 20CPU/40GB
> > s390x host, we noticed a throughput degradation (anywhere between 13% and
> > 40%, depending on test) when moving the host from
2017-09-12 16:14 GMT+02:00 Eric Farman :
> Hi Peter, Rik,
>
> Running sysbench measurements in a 16CPU/30GB KVM guest on a 20CPU/40GB
> s390x host, we noticed a throughput degradation (anywhere between 13% and
> 40%, depending on test) when moving the host from kernel 4.12 to 4.13. The
> rest of
Hi Peter, Rik,
Running sysbench measurements in a 16CPU/30GB KVM guest on a 20CPU/40GB
s390x host, we noticed a throughput degradation (anywhere between 13%
and 40%, depending on test) when moving the host from kernel 4.12 to
4.13. The rest of the host and the entire guest remain unchanged;