Re: [lkp] [ipc/sem.c] 5864a2fd30: aim9.shared_memory.ops_per_sec -13.0%

2016-10-19 Thread Manfred Spraul

On 10/20/2016 02:21 AM, Andrew Morton wrote:

> On Wed, 19 Oct 2016 06:38:14 +0200 Manfred Spraul wrote:


>> Hi,
>>
>> as discussed before:
>> The root cause for the performance regression is the smp_mb() that was
>> added into the fast path.
>>
>> I see two options:
>> 1) switch to full spin_lock()/spin_unlock() for the rare codepath,
>>    then the fast path doesn't need the smp_mb() anymore.
>>
>> 2) confirm that no arch needs the smp_mb(), then remove it.
>>    - powerpc is ok after commit
>>      6262db7c088b ("powerpc/spinlock: Fix spin_unlock_wait()")
>>    - arm64 is ok after commit
>>      d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against
>>      concurrent lockers")
>>    - x86 is ok after commit
>>      2c6100227116 ("locking/qspinlock: Fix spin_unlock_wait() some more")
>>    - for the remaining SMP architectures, I don't have a status.
>>
>> I would prefer approach 1:
>> The memory ordering provided by spin_lock()/spin_unlock() is clear.
>>
>> Thus:
>> Attached are patches for approach 1:
>>
>> - Patch 1 replaces spin_unlock_wait() with spin_lock()/spin_unlock() and
>>   removes all memory barriers that are then unnecessary.
>>
>> - Patch 2 adds the hysteresis code: this makes the rare codepath
>>   extremely rare.
>>   It also corrects some wrong comments, e.g. regarding switching
>>   from the global lock to per-sem locks (we "must" switch, not we "can"
>>   switch as written right now).
>>
>> The patches passed stress-testing.
>>
>> What do you think?

> Are you able to confirm that the performance issues are fixed?

I don't know why we are now at -13%.
The previous -9% were resolved by the patches:

http://marc.info/?l=linux-kernel&m=147608082017551&w=2

>> My initial idea was to aim for 4.10, then we have more time to decide.

> I suppose I can slip these into -next and see what the effect is upon
> the Intel test results.  But a) I don't know if they test linux-next(?)
> and b) I don't know where the test results are published, assuming they
> are published(?).


--

Manfred



Re: [lkp] [ipc/sem.c] 5864a2fd30: aim9.shared_memory.ops_per_sec -13.0%

2016-10-19 Thread Andrew Morton
On Wed, 19 Oct 2016 06:38:14 +0200 Manfred Spraul wrote:

> Hi,
> 
> as discussed before:
> The root cause for the performance regression is the smp_mb() that was
> added into the fast path.
> 
> I see two options:
> 1) switch to full spin_lock()/spin_unlock() for the rare codepath,
>   then the fast path doesn't need the smp_mb() anymore.
> 
> 2) confirm that no arch needs the smp_mb(), then remove it.
>   - powerpc is ok after commit
>  6262db7c088b ("powerpc/spinlock: Fix spin_unlock_wait()")
>   - arm64 is ok after commit
>  d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against 
> concurrent lockers")
>   - x86 is ok after commit
>  2c6100227116 ("locking/qspinlock: Fix spin_unlock_wait() some more")
>   - for the remaining SMP architectures, I don't have a status.
> 
> I would prefer approach 1:
> The memory ordering provided by spin_lock()/spin_unlock() is clear.
> 
> Thus:
> Attached are patches for approach 1:
> 
> - Patch 1 replaces spin_unlock_wait() with spin_lock()/spin_unlock() and
>   removes all memory barriers that are then unnecessary.
> 
> - Patch 2 adds the hysteresis code: This makes the rare codepath
>   extremely rare.
>   It also corrects some wrong comments, e.g. regarding switching
>   from the global lock to per-sem locks (we "must" switch, not we "can"
>   switch as written right now).
> 
> The patches passed stress-testing.
> 
> What do you think?

Are you able to confirm that the performance issues are fixed?

> My initial idea was to aim for 4.10, then we have more time to decide.

I suppose I can slip these into -next and see what the effect is upon
the Intel test results.  But a) I don't know if they test linux-next(?)
and b) I don't know where the test results are published, assuming they
are published(?).



Re: [lkp] [ipc/sem.c] 5864a2fd30: aim9.shared_memory.ops_per_sec -13.0%

2016-10-18 Thread Manfred Spraul
Hi,

as discussed before:
The root cause for the performance regression is the smp_mb() that was
added into the fast path.

I see two options:
1) switch to full spin_lock()/spin_unlock() for the rare codepath,
  then the fast path doesn't need the smp_mb() anymore.

2) confirm that no arch needs the smp_mb(), then remove it.
  - powerpc is ok after commit
 6262db7c088b ("powerpc/spinlock: Fix spin_unlock_wait()")
  - arm64 is ok after commit
 d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against 
concurrent lockers")
  - x86 is ok after commit
 2c6100227116 ("locking/qspinlock: Fix spin_unlock_wait() some more")
  - for the remaining SMP architectures, I don't have a status.

I would prefer approach 1:
The memory ordering provided by spin_lock()/spin_unlock() is clear.
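
To make the idea concrete, below is a minimal userspace sketch of approach 1,
with pthread mutexes standing in for the kernel spinlocks.  It is only an
illustration, not the ipc/sem.c code: NSEMS, complex_op_lock() and
simple_op_lock() are invented names.  The rare path here plays the role of
complexmode_enter(): it briefly takes and drops every per-semaphore lock, so
the fast path needs no barrier beyond the lock acquisition itself.

/*
 * Minimal userspace sketch of approach 1 -- NOT the ipc/sem.c code.
 * pthread mutexes stand in for the kernel spinlocks; all names are
 * invented for this illustration.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

#define NSEMS 4

struct sem {
    pthread_mutex_t lock;           /* per-semaphore lock (fast path) */
};

struct sem_array {
    pthread_mutex_t global_lock;    /* lock used by complex operations */
    atomic_bool complex_mode;       /* force everyone onto global_lock */
    struct sem sems[NSEMS];
};

/* Rare path: a complex operation (several semaphores) arrives. */
void complex_op_lock(struct sem_array *sma)
{
    int i;

    pthread_mutex_lock(&sma->global_lock);
    atomic_store_explicit(&sma->complex_mode, true, memory_order_relaxed);
    /*
     * Instead of spin_unlock_wait() paired with an smp_mb() in the fast
     * path: briefly acquire and release every per-semaphore lock.
     * Afterwards every fast-path holder has finished, and every later
     * fast-path acquirer observes complex_mode == true, because the
     * unlock/lock pair on the same mutex orders the store above before
     * the fast path's load.
     */
    for (i = 0; i < NSEMS; i++) {
        pthread_mutex_lock(&sma->sems[i].lock);
        pthread_mutex_unlock(&sma->sems[i].lock);
    }
    /* Caller performs the complex operation, then unlocks global_lock. */
}

/*
 * Fast path: an operation on a single semaphore.  Returns semnum if the
 * per-semaphore lock was taken, -1 if we fell back to the global lock.
 * Note: no explicit memory barrier anywhere in this function.
 */
int simple_op_lock(struct sem_array *sma, int semnum)
{
    pthread_mutex_lock(&sma->sems[semnum].lock);
    if (!atomic_load_explicit(&sma->complex_mode, memory_order_relaxed))
        return semnum;

    /* A complex operation is (or was) active: use the global lock. */
    pthread_mutex_unlock(&sma->sems[semnum].lock);
    pthread_mutex_lock(&sma->global_lock);
    return -1;
}

The sketch never leaves complex mode again; leaving it, and doing so only
rarely, is what the hysteresis in patch 2 is about.  Compile with -pthread;
the mutexes obviously need pthread_mutex_init() before use.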

Thus:
Attached are patches for approach 1:

- Patch 1 replaces spin_unlock_wait() with spin_lock()/spin_unlock() and
  removes all memory barriers that are then unnecessary.

- Patch 2 adds the hysteresis code: this makes the rare codepath
  extremely rare (see the sketch below).
  It also corrects some wrong comments, e.g. regarding switching
  from the global lock to per-sem locks (we "must" switch, not we "can"
  switch as written right now).
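
Roughly, the hysteresis can be pictured as follows (again an invented
userspace sketch, not the actual patch; the names and the hysteresis value
are made up): the boolean mode flag becomes a counter.  A complex operation
arms the counter; simple operations that still have to take the global lock
count it down when they unlock; only when the counter reaches zero does the
array fall back to per-semaphore locking.  An alternating simple/complex
workload therefore no longer ping-pongs through the expensive mode switch.

/*
 * Hysteresis sketch -- invented userspace code, NOT the actual patch.
 * The boolean mode flag of the previous sketch becomes a counter.
 */
#include <pthread.h>
#include <stdatomic.h>

#define NSEMS 4
#define HYSTERESIS 10                   /* illustrative value */

struct sem_h {
    pthread_mutex_t lock;               /* per-semaphore lock */
};

struct sem_array_h {
    pthread_mutex_t global_lock;        /* lock for complex operations */
    atomic_int use_global_lock;         /* 0: per-sem locks, >0: global lock */
    struct sem_h sems[NSEMS];
};

/* Complex operation: always the global lock; the mode switch is the rare part. */
void complex_lock_h(struct sem_array_h *sma)
{
    int i, prev;

    pthread_mutex_lock(&sma->global_lock);
    prev = atomic_load_explicit(&sma->use_global_lock, memory_order_relaxed);
    /* Arm (or re-arm) the hysteresis counter first... */
    atomic_store_explicit(&sma->use_global_lock, HYSTERESIS,
                          memory_order_relaxed);
    /*
     * ...then wait out the fast-path holders, but only if we really were
     * in per-semaphore mode.  This loop is the expensive step; with the
     * hysteresis it runs far less often.
     */
    if (prev == 0) {
        for (i = 0; i < NSEMS; i++) {
            pthread_mutex_lock(&sma->sems[i].lock);
            pthread_mutex_unlock(&sma->sems[i].lock);
        }
    }
}

/*
 * Simple operation, lock side.  Returns semnum if the per-semaphore lock
 * was taken, -1 if the global lock was taken.
 */
int simple_lock_h(struct sem_array_h *sma, int semnum)
{
    if (atomic_load_explicit(&sma->use_global_lock, memory_order_relaxed) == 0) {
        pthread_mutex_lock(&sma->sems[semnum].lock);
        /* Recheck; pairs with the release store in simple_unlock_h(). */
        if (atomic_load_explicit(&sma->use_global_lock,
                                 memory_order_acquire) == 0)
            return semnum;              /* fast path */
        pthread_mutex_unlock(&sma->sems[semnum].lock);
    }

    pthread_mutex_lock(&sma->global_lock);
    if (atomic_load_explicit(&sma->use_global_lock, memory_order_relaxed) == 0) {
        /*
         * Global-lock mode ended while we waited: we *must* switch to the
         * per-semaphore lock now.  The counter cannot change while we hold
         * global_lock, so no recheck is needed.
         */
        pthread_mutex_lock(&sma->sems[semnum].lock);
        pthread_mutex_unlock(&sma->global_lock);
        return semnum;
    }
    return -1;
}

/* Simple operation, unlock side: this is where the hysteresis counts down. */
void simple_unlock_h(struct sem_array_h *sma, int locknum, int semnum)
{
    int v;

    if (locknum == semnum) {
        pthread_mutex_unlock(&sma->sems[semnum].lock);
        return;
    }
    v = atomic_load_explicit(&sma->use_global_lock, memory_order_relaxed);
    if (v == 1)         /* last step back to per-sem mode: needs release */
        atomic_store_explicit(&sma->use_global_lock, 0, memory_order_release);
    else if (v > 1)
        atomic_store_explicit(&sma->use_global_lock, v - 1,
                              memory_order_relaxed);
    pthread_mutex_unlock(&sma->global_lock);
}

The "must switch, not can switch" case from the comment fix is the branch in
simple_lock_h() that runs when the counter dropped to zero while we were
waiting for the global lock.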

The patches passed stress-testing.

What do you think?
My initial idea was to aim for 4.10, then we have more time to decide.

--
Manfred