On Wed, May 30, 2018 at 03:08:21PM -0700, Subhra Mazumdar wrote:
> I tested with FOLD+AGE+ONCE+PONIES+PONIES2 shift=0 vs baseline but see some
> regression for hackbench and uperf:
I'm not seeing a hackbench regression myself, but I let it run a whole
lot of stuff over-night and I do indeed see
On 05/29/2018 02:36 PM, Peter Zijlstra wrote:
> On Wed, May 02, 2018 at 02:58:42PM -0700, Subhra Mazumdar wrote:
>> I re-ran the test after fixing that bug but still get similar regressions
>> for hackbench
>> Hackbench process on 2 socket, 44 core and 88 threads Intel x86 machine
>> (lower is better):
On Wed, May 02, 2018 at 02:58:42PM -0700, Subhra Mazumdar wrote:
> I re-ran the test after fixing that bug but still get similar regressions
> for hackbench
> Hackbench process on 2 socket, 44 core and 88 threads Intel x86 machine
> (lower is better):
> groups baseline %stdev patch %stdev
On 05/02/2018 02:58 PM, Subhra Mazumdar wrote:
> On 05/01/2018 11:03 AM, Peter Zijlstra wrote:
>> On Mon, Apr 30, 2018 at 04:38:42PM -0700, Subhra Mazumdar wrote:
>>> I also noticed a possible bug later in the merge code. Shouldn't it be:
>>> if (busy < best_busy) {
>>> best_busy = busy;
On 05/01/2018 11:03 AM, Peter Zijlstra wrote:
> On Mon, Apr 30, 2018 at 04:38:42PM -0700, Subhra Mazumdar wrote:
>> I also noticed a possible bug later in the merge code. Shouldn't it be:
>> if (busy < best_busy) {
>> best_busy = busy;
>> best_cpu = first_idle;
>> }
> Uhh, quite. I did say
On Mon, Apr 30, 2018 at 04:38:42PM -0700, Subhra Mazumdar wrote:
> I also noticed a possible bug later in the merge code. Shouldn't it be:
>
> if (busy < best_busy) {
> best_busy = busy;
> best_cpu = first_idle;
> }
Uhh, quite. I did say it was completely untested, but yes.. /me
On 04/25/2018 10:49 AM, Peter Zijlstra wrote:
> On Tue, Apr 24, 2018 at 02:45:50PM -0700, Subhra Mazumdar wrote:
>> So what you said makes sense in theory but is not borne out by real
>> world results. This indicates that threads of these benchmarks care more
>> about running immediately on any idle cpu
On Tue, Apr 24, 2018 at 02:45:50PM -0700, Subhra Mazumdar wrote:
> So what you said makes sense in theory but is not borne out by real
> world results. This indicates that threads of these benchmarks care more
> about running immediately on any idle cpu rather than spending time to find
> fully
On 04/24/2018 05:46 AM, Peter Zijlstra wrote:
> On Mon, Apr 23, 2018 at 05:41:14PM -0700, subhra mazumdar wrote:
>> select_idle_core() can potentially search all cpus to find the fully idle
>> core even if there is one such core. Removing this is necessary to achieve
>> scalability in the fast path.
> So
On Mon, Apr 23, 2018 at 05:41:14PM -0700, subhra mazumdar wrote:
> select_idle_core() can potentially search all cpus to find the fully idle
> core even if there is one such core. Removing this is necessary to achieve
> scalability in the fast path.
So this removes the whole core awareness from
select_idle_core() can potentially search all cpus to find the fully idle
core even if there is one such core. Removing this is necessary to achieve
scalability in the fast path.
Signed-off-by: subhra mazumdar
---
include/linux/sched/topology.h | 1 -
kernel/sched/fair.c | 97