Re: [Xen-devel] [PATCH 1/2] xen/rwlocks: call preempt_disable() when taking a rwlock

2020-03-13 Thread Jürgen Groß

On 13.03.20 12:39, Julien Grall wrote:

Hi Juergen,

On 13/03/2020 11:23, Jürgen Groß wrote:

On 13.03.20 11:40, Julien Grall wrote:

Hi Juergen,

On 13/03/2020 10:15, Jürgen Groß wrote:

On 13.03.20 11:02, Julien Grall wrote:

Hi Juergen,

On 13/03/2020 08:05, Juergen Gross wrote:

Similar to spinlocks preemption should be disabled while holding a
rwlock.

Signed-off-by: Juergen Gross 
---
  xen/include/xen/rwlock.h | 18 +-
  1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index 1c221dd0d9..4ee341a182 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -2,6 +2,7 @@
  #define __RWLOCK_H__
  #include <xen/percpu.h>
+#include <xen/preempt.h>
  #include <xen/smp.h>
  #include <xen/spinlock.h>
@@ -57,10 +58,12 @@ static inline int _read_trylock(rwlock_t *lock)
  cnts = atomic_read(&lock->cnts);
  if ( likely(_can_read_lock(cnts)) )
  {


If you get preempted here, then it means the check below is likely
going to fail. So I think it would be best to disable preemption
before, to give it a better chance of succeeding.


As the probability of being preempted at this very point should be much
lower than that of being preempted while holding the lock, I think that
is optimizing the wrong path.


Why so? Lock contention should be fairly limited or you already have 
a problem on your system. So preemption is more likely.


Today the probability of preemption is 0.


I am aware of that...



Even with preemption added, the probability of being preempted in a
sequence of about a dozen instructions is _very_ low.


... but I am not convinced of the low probability here.






I'm not
opposed to doing the modification you are requesting, but I would like
to hear a second opinion on that topic, especially as I'd need to add
another preempt_enable() call when following your advice.


I don't really see the problem with adding a new preempt_enable()
call. But the code can also be reworked to have only one call...


It's the dynamic path I'm speaking of. Accessing a local CPU variable
is not that cheap, and if we add preemption in the future,
preempt_enable() will become even more expensive.


Do you realize that the lock is most likely uncontended? And if it
were contended, the caller would likely not spin in a tight loop;
otherwise it would have used read_lock().


So until you prove me otherwise (with numbers), this is a
micro-optimization that is not going to be visible in any workload.


Fine, since you feel so strongly about that, I'll change it.


Juergen

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH 1/2] xen/rwlocks: call preempt_disable() when taking a rwlock

2020-03-13 Thread Julien Grall

Hi Juergen,

On 13/03/2020 11:23, Jürgen Groß wrote:

On 13.03.20 11:40, Julien Grall wrote:

Hi Juergen,

On 13/03/2020 10:15, Jürgen Groß wrote:

On 13.03.20 11:02, Julien Grall wrote:

Hi Juergen,

On 13/03/2020 08:05, Juergen Gross wrote:

Similar to spinlocks preemption should be disabled while holding a
rwlock.

Signed-off-by: Juergen Gross 
---
  xen/include/xen/rwlock.h | 18 +-
  1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index 1c221dd0d9..4ee341a182 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -2,6 +2,7 @@
  #define __RWLOCK_H__
  #include <xen/percpu.h>
+#include <xen/preempt.h>
  #include <xen/smp.h>
  #include <xen/spinlock.h>
@@ -57,10 +58,12 @@ static inline int _read_trylock(rwlock_t *lock)
  cnts = atomic_read(&lock->cnts);
  if ( likely(_can_read_lock(cnts)) )
  {


If you get preempted here, then it means the check below is likely
going to fail. So I think it would be best to disable preemption
before, to give it a better chance of succeeding.


As the probability of being preempted at this very point should be much
lower than that of being preempted while holding the lock, I think that
is optimizing the wrong path.


Why so? Lock contention should be fairly limited or you already have a 
problem on your system. So preemption is more likely.


Today the probability of preemption is 0.


I am aware of that...



Even with preemption added, the probability of being preempted in a
sequence of about a dozen instructions is _very_ low.


... but I am not convinced of the low probability here.






I'm not
opposed to doing the modification you are requesting, but I would like
to hear a second opinion on that topic, especially as I'd need to add
another preempt_enable() call when following your advice.


I don't really see the problem with adding a new preempt_enable()
call. But the code can also be reworked to have only one call...


It's the dynamic path I'm speaking of. Accessing a local CPU variable
is not that cheap, and if we add preemption in the future,
preempt_enable() will become even more expensive.


Do you realize that the lock is most likely uncontended? And if it
were contended, the caller would likely not spin in a tight loop;
otherwise it would have used read_lock().


So until you prove me otherwise (with numbers), this is a
micro-optimization that is not going to be visible in any workload.


Cheers,

--
Julien Grall


Re: [Xen-devel] [PATCH 1/2] xen/rwlocks: call preempt_disable() when taking a rwlock

2020-03-13 Thread Jürgen Groß

On 13.03.20 11:40, Julien Grall wrote:

Hi Juergen,

On 13/03/2020 10:15, Jürgen Groß wrote:

On 13.03.20 11:02, Julien Grall wrote:

Hi Juergen,

On 13/03/2020 08:05, Juergen Gross wrote:

Similar to spinlocks preemption should be disabled while holding a
rwlock.

Signed-off-by: Juergen Gross 
---
  xen/include/xen/rwlock.h | 18 +-
  1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index 1c221dd0d9..4ee341a182 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -2,6 +2,7 @@
  #define __RWLOCK_H__
  #include <xen/percpu.h>
+#include <xen/preempt.h>
  #include <xen/smp.h>
  #include <xen/spinlock.h>
@@ -57,10 +58,12 @@ static inline int _read_trylock(rwlock_t *lock)
  cnts = atomic_read(&lock->cnts);
  if ( likely(_can_read_lock(cnts)) )
  {


If you get preempted here, then it means the check below is likely
going to fail. So I think it would be best to disable preemption
before, to give it a better chance of succeeding.


As the probability of being preempted at this very point should be much
lower than that of being preempted while holding the lock, I think that
is optimizing the wrong path.


Why so? Lock contention should be fairly limited or you already have a 
problem on your system. So preemption is more likely.


Today the probability of preemption is 0.

Even with preemption added, the probability of being preempted in a
sequence of about a dozen instructions is _very_ low.




I'm not
opposed to doing the modification you are requesting, but I would like
to hear a second opinion on that topic, especially as I'd need to add
another preempt_enable() call when following your advice.


I don't really see the problem with adding a new preempt_enable()
call. But the code can also be reworked to have only one call...


It's the dynamic path I'm speaking of. Accessing a local CPU variable
is not that cheap, and if we add preemption in the future,
preempt_enable() will become even more expensive.


Juergen


Re: [Xen-devel] [PATCH 1/2] xen/rwlocks: call preempt_disable() when taking a rwlock

2020-03-13 Thread Julien Grall

Hi Juergen,

On 13/03/2020 10:15, Jürgen Groß wrote:

On 13.03.20 11:02, Julien Grall wrote:

Hi Juergen,

On 13/03/2020 08:05, Juergen Gross wrote:

Similar to spinlocks preemption should be disabled while holding a
rwlock.

Signed-off-by: Juergen Gross 
---
  xen/include/xen/rwlock.h | 18 +-
  1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index 1c221dd0d9..4ee341a182 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -2,6 +2,7 @@
  #define __RWLOCK_H__
  #include <xen/percpu.h>
+#include <xen/preempt.h>
  #include <xen/smp.h>
  #include <xen/spinlock.h>
@@ -57,10 +58,12 @@ static inline int _read_trylock(rwlock_t *lock)
  cnts = atomic_read(&lock->cnts);
  if ( likely(_can_read_lock(cnts)) )
  {


If you get preempted here, then it means the check below is likely
going to fail. So I think it would be best to disable preemption
before, to give it a better chance of succeeding.


As the probability of being preempted at this very point should be much
lower than that of being preempted while holding the lock, I think that
is optimizing the wrong path.


Why so? Lock contention should be fairly limited or you already have a 
problem on your system. So preemption is more likely.



I'm not
opposed to doing the modification you are requesting, but I would like
to hear a second opinion on that topic, especially as I'd need to add
another preempt_enable() call when following your advice.


I don't really see the problem with adding a new preempt_enable()
call. But the code can also be reworked to have only one call...


Cheers,


--
Julien Grall


Re: [Xen-devel] [PATCH 1/2] xen/rwlocks: call preempt_disable() when taking a rwlock

2020-03-13 Thread Jan Beulich
On 13.03.2020 11:15, Jürgen Groß wrote:
> On 13.03.20 11:02, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 13/03/2020 08:05, Juergen Gross wrote:
>>> Similar to spinlocks preemption should be disabled while holding a
>>> rwlock.
>>>
>>> Signed-off-by: Juergen Gross 
>>> ---
>>>   xen/include/xen/rwlock.h | 18 +-
>>>   1 file changed, 17 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
>>> index 1c221dd0d9..4ee341a182 100644
>>> --- a/xen/include/xen/rwlock.h
>>> +++ b/xen/include/xen/rwlock.h
>>> @@ -2,6 +2,7 @@
>>>   #define __RWLOCK_H__
>>>   #include <xen/percpu.h>
>>> +#include <xen/preempt.h>
>>>   #include <xen/smp.h>
>>>   #include <xen/spinlock.h>
>>> @@ -57,10 +58,12 @@ static inline int _read_trylock(rwlock_t *lock)
>>>   cnts = atomic_read(&lock->cnts);
>>>   if ( likely(_can_read_lock(cnts)) )
>>>   {
>>
>> If you get preempted here, then it means the check below is likely going
>> to fail. So I think it would be best to disable preemption before, to
>> give it a better chance of succeeding.
> 
> As the probability of being preempted at this very point should be much
> lower than that of being preempted while holding the lock, I think that
> is optimizing the wrong path. I'm not opposed to doing the modification
> you are requesting, but I would like to hear a second opinion on that
> topic, especially as I'd need to add another preempt_enable() call when
> following your advice.

I can see arguments for both placements, and hence I'm fine either
way.

Jan


Re: [Xen-devel] [PATCH 1/2] xen/rwlocks: call preempt_disable() when taking a rwlock

2020-03-13 Thread Jürgen Groß

On 13.03.20 11:02, Julien Grall wrote:

Hi Juergen,

On 13/03/2020 08:05, Juergen Gross wrote:

Similar to spinlocks preemption should be disabled while holding a
rwlock.

Signed-off-by: Juergen Gross 
---
  xen/include/xen/rwlock.h | 18 +-
  1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index 1c221dd0d9..4ee341a182 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -2,6 +2,7 @@
  #define __RWLOCK_H__
  #include <xen/percpu.h>
+#include <xen/preempt.h>
  #include <xen/smp.h>
  #include <xen/spinlock.h>
@@ -57,10 +58,12 @@ static inline int _read_trylock(rwlock_t *lock)
  cnts = atomic_read(&lock->cnts);
  if ( likely(_can_read_lock(cnts)) )
  {


If you get preempted here, then it means the check below is likely going
to fail. So I think it would be best to disable preemption before, to
give it a better chance of succeeding.


As the probability of being preempted at this very point should be much
lower than that of being preempted while holding the lock, I think that
is optimizing the wrong path. I'm not opposed to doing the modification
you are requesting, but I would like to hear a second opinion on that
topic, especially as I'd need to add another preempt_enable() call when
following your advice.


Juergen


Re: [Xen-devel] [PATCH 1/2] xen/rwlocks: call preempt_disable() when taking a rwlock

2020-03-13 Thread Julien Grall

Hi Juergen,

On 13/03/2020 08:05, Juergen Gross wrote:

Similar to spinlocks preemption should be disabled while holding a
rwlock.

Signed-off-by: Juergen Gross 
---
  xen/include/xen/rwlock.h | 18 +-
  1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index 1c221dd0d9..4ee341a182 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -2,6 +2,7 @@
  #define __RWLOCK_H__
  
  #include <xen/percpu.h>
+#include <xen/preempt.h>
  #include <xen/smp.h>
  #include <xen/spinlock.h>

@@ -57,10 +58,12 @@ static inline int _read_trylock(rwlock_t *lock)

  cnts = atomic_read(&lock->cnts);
  if ( likely(_can_read_lock(cnts)) )
  {


If you get preempted here, then it means the check below is likely going
to fail. So I think it would be best to disable preemption before, to
give it a better chance of succeeding.



+        preempt_disable();
         cnts = (u32)atomic_add_return(_QR_BIAS, &lock->cnts);
         if ( likely(_can_read_lock(cnts)) )
             return 1;
         atomic_sub(_QR_BIAS, &lock->cnts);
+        preempt_enable();
     }
     return 0;
 }
@@ -73,6 +76,7 @@ static inline void _read_lock(rwlock_t *lock)
 {
     u32 cnts;
 
+    preempt_disable();
     cnts = atomic_add_return(_QR_BIAS, &lock->cnts);
     if ( likely(_can_read_lock(cnts)) )
         return;
@@ -106,6 +110,7 @@ static inline void _read_unlock(rwlock_t *lock)
      * Atomically decrement the reader count
      */
     atomic_sub(_QR_BIAS, &lock->cnts);
+    preempt_enable();
 }
  
  static inline void _read_unlock_irq(rwlock_t *lock)

@@ -137,6 +142,7 @@ static inline unsigned int _write_lock_val(void)
 static inline void _write_lock(rwlock_t *lock)
 {
     /* Optimize for the unfair lock case where the fair flag is 0. */
+    preempt_disable();
     if ( atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) == 0 )
         return;
 
@@ -172,13 +178,21 @@ static inline int _write_trylock(rwlock_t *lock)
     if ( unlikely(cnts) )
         return 0;
 
-    return likely(atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) == 0);
+    preempt_disable();


Same remark as for read_trylock().


+    if ( unlikely(atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) != 0) )
+    {
+        preempt_enable();
+        return 0;
+    }
+
+    return 1;
 }
 
 static inline void _write_unlock(rwlock_t *lock)
 {
     ASSERT(_is_write_locked_by_me(atomic_read(&lock->cnts)));
     atomic_and(~(_QW_CPUMASK | _QW_WMASK), &lock->cnts);
+    preempt_enable();
 }
  
  static inline void _write_unlock_irq(rwlock_t *lock)

@@ -274,6 +288,7 @@ static inline void _percpu_read_lock(percpu_rwlock_t **per_cpudata,
     }
 
     /* Indicate this cpu is reading. */
+    preempt_disable();
     this_cpu_ptr(per_cpudata) = percpu_rwlock;
     smp_mb();
     /* Check if a writer is waiting. */
@@ -308,6 +323,7 @@ static inline void _percpu_read_unlock(percpu_rwlock_t **per_cpudata,
         return;
     }
     this_cpu_ptr(per_cpudata) = NULL;
+    preempt_enable();
     smp_wmb();
 }
  



Cheers,

--
Julien Grall


Re: [Xen-devel] [PATCH 1/2] xen/rwlocks: call preempt_disable() when taking a rwlock

2020-03-13 Thread Julien Grall



On 13/03/2020 08:48, Jan Beulich wrote:

On 13.03.2020 09:05, Juergen Gross wrote:

Similar to spinlocks preemption should be disabled while holding a
rwlock.

Signed-off-by: Juergen Gross 


Just one note/question:


@@ -308,6 +323,7 @@ static inline void _percpu_read_unlock(percpu_rwlock_t **per_cpudata,
  return;
  }
  this_cpu_ptr(per_cpudata) = NULL;
+preempt_enable();
  smp_wmb();
  }


It would seem more logical to me to insert this after the smp_wmb().


+1


Thoughts? I'll be happy to give my R-b once we've settled on this.

Jan



--
Julien Grall


Re: [Xen-devel] [PATCH 1/2] xen/rwlocks: call preempt_disable() when taking a rwlock

2020-03-13 Thread Jan Beulich
On 13.03.2020 09:05, Juergen Gross wrote:
> Similar to spinlocks preemption should be disabled while holding a
> rwlock.
> 
> Signed-off-by: Juergen Gross 

Just one note/question:

> @@ -308,6 +323,7 @@ static inline void _percpu_read_unlock(percpu_rwlock_t **per_cpudata,
>  return;
>  }
>  this_cpu_ptr(per_cpudata) = NULL;
> +preempt_enable();
>  smp_wmb();
>  }

It would seem more logical to me to insert this after the smp_wmb().
Thoughts? I'll be happy to give my R-b once we've settled on this.

Jan


[Xen-devel] [PATCH 1/2] xen/rwlocks: call preempt_disable() when taking a rwlock

2020-03-13 Thread Juergen Gross
Similar to spinlocks, preemption should be disabled while holding a
rwlock.

Signed-off-by: Juergen Gross 
---
 xen/include/xen/rwlock.h | 18 ++++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index 1c221dd0d9..4ee341a182 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -2,6 +2,7 @@
 #define __RWLOCK_H__
 
 #include <xen/percpu.h>
+#include <xen/preempt.h>
 #include <xen/smp.h>
 #include <xen/spinlock.h>
 
@@ -57,10 +58,12 @@ static inline int _read_trylock(rwlock_t *lock)
     cnts = atomic_read(&lock->cnts);
     if ( likely(_can_read_lock(cnts)) )
     {
+        preempt_disable();
         cnts = (u32)atomic_add_return(_QR_BIAS, &lock->cnts);
         if ( likely(_can_read_lock(cnts)) )
             return 1;
         atomic_sub(_QR_BIAS, &lock->cnts);
+        preempt_enable();
     }
     return 0;
 }
@@ -73,6 +76,7 @@ static inline void _read_lock(rwlock_t *lock)
 {
     u32 cnts;
 
+    preempt_disable();
     cnts = atomic_add_return(_QR_BIAS, &lock->cnts);
     if ( likely(_can_read_lock(cnts)) )
         return;
@@ -106,6 +110,7 @@ static inline void _read_unlock(rwlock_t *lock)
      * Atomically decrement the reader count
      */
     atomic_sub(_QR_BIAS, &lock->cnts);
+    preempt_enable();
 }
 
 static inline void _read_unlock_irq(rwlock_t *lock)
@@ -137,6 +142,7 @@ static inline unsigned int _write_lock_val(void)
 static inline void _write_lock(rwlock_t *lock)
 {
     /* Optimize for the unfair lock case where the fair flag is 0. */
+    preempt_disable();
     if ( atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) == 0 )
         return;
 
@@ -172,13 +178,21 @@ static inline int _write_trylock(rwlock_t *lock)
     if ( unlikely(cnts) )
         return 0;
 
-    return likely(atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) == 0);
+    preempt_disable();
+    if ( unlikely(atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) != 0) )
+    {
+        preempt_enable();
+        return 0;
+    }
+
+    return 1;
 }
 
 static inline void _write_unlock(rwlock_t *lock)
 {
     ASSERT(_is_write_locked_by_me(atomic_read(&lock->cnts)));
     atomic_and(~(_QW_CPUMASK | _QW_WMASK), &lock->cnts);
+    preempt_enable();
 }
 
 static inline void _write_unlock_irq(rwlock_t *lock)
@@ -274,6 +288,7 @@ static inline void _percpu_read_lock(percpu_rwlock_t **per_cpudata,
     }
 
     /* Indicate this cpu is reading. */
+    preempt_disable();
     this_cpu_ptr(per_cpudata) = percpu_rwlock;
     smp_mb();
     /* Check if a writer is waiting. */
@@ -308,6 +323,7 @@ static inline void _percpu_read_unlock(percpu_rwlock_t **per_cpudata,
         return;
     }
     this_cpu_ptr(per_cpudata) = NULL;
+    preempt_enable();
     smp_wmb();
 }
 
-- 
2.16.4

