Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-10-10 Thread Ingo Molnar

* Jeremy Fitzhardinge jer...@goop.org wrote:

 On 10/06/2011 10:40 AM, Jeremy Fitzhardinge wrote:
  However, it looks like locked xadd also has better performance: on
  my Sandybridge laptop (2 cores, 4 threads), the add+mfence is 20% slower
  than locked xadd, so that pretty much settles it unless you think
  there'd be a dramatic difference on an AMD system.
 
 Konrad measures add+mfence is about 65% slower on AMD Phenom as well.

xadd also results in smaller/tighter code, right?

Thanks,

Ingo


Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-10-10 Thread Stephan Diestelhorst
On Thursday 06 October 2011, 13:40:01 Jeremy Fitzhardinge wrote:
 On 10/06/2011 07:04 AM, Stephan Diestelhorst wrote:
  On Wednesday 28 September 2011, 14:49:56 Linus Torvalds wrote:
  Which certainly should *work*, but from a conceptual standpoint, isn't
  it just *much* nicer to say we actually know *exactly* what the upper
  bits were.
  Well, we really do NOT want atomicity here. What we really rather want
  is sequentiality: free the lock, make the update visible, and THEN
  check if someone has gone sleeping on it.
 
  Atomicity only conveniently enforces that the three do not happen in a
  different order (with the store becoming visible after the checking
  load).
 
  This does not have to be atomic, since spurious wakeups are not a
  problem, in particular not with the FIFO-ness of ticket locks.
 
  For that the fence, additional atomic etc. would be IMHO much cleaner
  than the crazy overflow logic.
 
 All things being equal I'd prefer lock-xadd just because it's easier to
 analyze the concurrency for, crazy overflow tests or no.  But if
 add+mfence turned out to be a performance win, then that would obviously
 tip the scales.
 
 However, it looks like locked xadd also has better performance: on
 my Sandybridge laptop (2 cores, 4 threads), the add+mfence is 20% slower
 than locked xadd, so that pretty much settles it unless you think
 there'd be a dramatic difference on an AMD system.

Indeed, the fences are usually slower than locked RMWs, in particular,
if you do not need to add an instruction. I originally missed that
amazing stunt the GCC pulled off with replacing the branch with carry
flag magic. It seems that two twisted minds have found each other
here :)

One of my concerns was adding a branch in here... so that is settled,
and if everybody else feels like this is easier to reason about...
go ahead :) (I'll keep my itch to myself then.)

Stephan
-- 
Stephan Diestelhorst, AMD Operating System Research Center
stephan.diestelho...@amd.com, Tel. +49 (0)351 448 356 719

Advanced Micro Devices GmbH
Einsteinring 24
85609 Aschheim
Germany

Geschaeftsfuehrer: Alberto Bozzo;
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632, WEEE-Reg-Nr: DE 12919551 



Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-10-10 Thread Stephan Diestelhorst
On Monday 10 October 2011, 07:00:50 Stephan Diestelhorst wrote:
 On Thursday 06 October 2011, 13:40:01 Jeremy Fitzhardinge wrote:
  On 10/06/2011 07:04 AM, Stephan Diestelhorst wrote:
   On Wednesday 28 September 2011, 14:49:56 Linus Torvalds wrote:
   Which certainly should *work*, but from a conceptual standpoint, isn't
   it just *much* nicer to say we actually know *exactly* what the upper
   bits were.
   Well, we really do NOT want atomicity here. What we really rather want
   is sequentiality: free the lock, make the update visible, and THEN
   check if someone has gone sleeping on it.
  
   Atomicity only conveniently enforces that the three do not happen in a
   different order (with the store becoming visible after the checking
   load).
  
   This does not have to be atomic, since spurious wakeups are not a
   problem, in particular not with the FIFO-ness of ticket locks.
  
   For that the fence, additional atomic etc. would be IMHO much cleaner
   than the crazy overflow logic.
  
  All things being equal I'd prefer lock-xadd just because it's easier to
  analyze the concurrency for, crazy overflow tests or no.  But if
  add+mfence turned out to be a performance win, then that would obviously
  tip the scales.
  
  However, it looks like locked xadd also has better performance: on
  my Sandybridge laptop (2 cores, 4 threads), the add+mfence is 20% slower
  than locked xadd, so that pretty much settles it unless you think
  there'd be a dramatic difference on an AMD system.
 
 Indeed, the fences are usually slower than locked RMWs, in particular,
 if you do not need to add an instruction. I originally missed that
 amazing stunt the GCC pulled off with replacing the branch with carry
 flag magic. It seems that two twisted minds have found each other
 here :)
 
 One of my concerns was adding a branch in here... so that is settled,
 and if everybody else feels like this is easier to reason about...
 go ahead :) (I'll keep my itch to myself then.)

Just that I can't... if performance is a concern, adding the LOCK
prefix to the addb outperforms the xadd significantly:

With mean over 100 runs... this comes out as follows
(on my Phenom II)

locked-add   0.648500 s   80%
add-rmwtos   0.707700 s   88%
locked-xadd  0.807600 s  100%
add-barrier  1.27 s  157%

With huge read contention added in (as cheaply as possible):
locked-add.openmp  0.640700 s  84%
add-rmwtos.openmp  0.658400 s  86%
locked-xadd.openmp 0.763800 s 100%

And the numbers for write contention are crazy, but also feature the
locked-add version:
locked-add.openmp  0.571400 s  71%
add-rmwtos.openmp  0.699900 s  87%
locked-xadd.openmp 0.800200 s 100%

Stephan
-- 
Stephan Diestelhorst, AMD Operating System Research Center
stephan.diestelho...@amd.com, Tel. +49 (0)351 448 356 719

Advanced Micro Devices GmbH
Einsteinring 24
85609 Aschheim
Germany

Geschaeftsfuehrer: Alberto Bozzo;
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632, WEEE-Reg-Nr: DE 12919551

#include <stdio.h>

struct {
	unsigned char flag;
	unsigned char val;
} l;

int main(int argc, char **argv)
{
	int i;

	{
		{
			for (i = 0; i < 100000000; i++) {	/* iteration count: placeholder, original lost */
				l.val += 2;
				/* locked no-op RMW to top-of-stack acts as a full barrier */
				asm volatile("lock orl $0x0,(%%rsp)" : : : "memory");
				if (l.flag)
					break;
				asm volatile("" : : : "memory");
			}
			l.flag = 1;
		}
	}
	return 0;
}
#include <stdio.h>

struct {
	unsigned char flag;
	unsigned char val;
} l;

int main(int argc, char **argv)
{
	int i;

#   pragma omp sections 
	{
#   pragma omp section
		{
			for (i = 0; i < 100000000; i++) {	/* iteration count: placeholder, original lost */
				l.val += 2;
				asm volatile("lock orl $0x0,(%%rsp)" : : : "memory");
				if (l.flag)
					break;
				asm volatile("" : : : "memory");
			}
			l.flag = 1;
		}
#   pragma omp section
		while(!l.flag)
			asm volatile("" : : : "memory");
			//asm volatile("lock orb $0x0, %0" : : "m"(l.flag) : "memory");
	}
	return 0;
}
#include <stdio.h>

struct {
	unsigned char flag;
	unsigned char val;
} l;

int main(int argc, char **argv)
{
	int i;
	{
		{
			for (i = 0; i < 100000000; i++) {	/* iteration count: placeholder, original lost */
				/* LOCK-prefixed add: release and full barrier in one instruction */
				asm volatile("lock addb %1, %0" : "+m"(l.val) : "r"((char)2) : "memory");
				if (l.flag)
					break;
				asm volatile("" : : : "memory");
			}
			l.flag = 1;
		}
	}
	return 0;
}
#include <stdio.h>

union {
	struct {
		unsigned char val;
		unsigned char flag;
	};
	unsigned short lock;
} l = { 0,0 };

int main(int argc, char **argv)
{
	int i;
#   pragma omp sections 
	{
#   pragma omp section
	{

			for (i = 0; i < 100000000; i++) {	/* iteration count: placeholder, original lost */
				unsigned short inc = 2;
				if (l.val >= (0x100 - 2))
					inc += -1 << 8;	/* compensate for the carry out of the low byte */
				asm volatile("lock; xadd %1,%0" : "+m" (l.lock), "+r" (inc) : );
				if (inc & 0x100)
					break;
				asm volatile("" : : : "memory");
			}
			l.flag = 1;
		}
#   pragma omp section
	while(!l.flag)
		asm volatile("" : : : "memory");
			//asm volatile("lock orb $0x0, %0" : : "m"(l.flag) : "memory");
	}
	return 0;
}
#include <stdio.h>

struct {
	unsigned char flag;
	unsigned char val;
} l;

int main(int argc, char **argv)
{
	int i;
#   pragma omp sections 
	{
#   pragma omp section
		

Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-10-10 Thread Jeremy Fitzhardinge
On 10/10/2011 07:01 AM, Stephan Diestelhorst wrote:
 On Monday 10 October 2011, 07:00:50 Stephan Diestelhorst wrote:
 On Thursday 06 October 2011, 13:40:01 Jeremy Fitzhardinge wrote:
 On 10/06/2011 07:04 AM, Stephan Diestelhorst wrote:
 On Wednesday 28 September 2011, 14:49:56 Linus Torvalds wrote:
 Which certainly should *work*, but from a conceptual standpoint, isn't
 it just *much* nicer to say we actually know *exactly* what the upper
 bits were.
 Well, we really do NOT want atomicity here. What we really rather want
 is sequentiality: free the lock, make the update visible, and THEN
 check if someone has gone sleeping on it.

 Atomicity only conveniently enforces that the three do not happen in a
 different order (with the store becoming visible after the checking
 load).

 This does not have to be atomic, since spurious wakeups are not a
 problem, in particular not with the FIFO-ness of ticket locks.

 For that the fence, additional atomic etc. would be IMHO much cleaner
 than the crazy overflow logic.
 All things being equal I'd prefer lock-xadd just because it's easier to
 analyze the concurrency for, crazy overflow tests or no.  But if
 add+mfence turned out to be a performance win, then that would obviously
 tip the scales.

 However, it looks like locked xadd also has better performance: on
 my Sandybridge laptop (2 cores, 4 threads), the add+mfence is 20% slower
 than locked xadd, so that pretty much settles it unless you think
 there'd be a dramatic difference on an AMD system.
 Indeed, the fences are usually slower than locked RMWs, in particular,
 if you do not need to add an instruction. I originally missed that
 amazing stunt the GCC pulled off with replacing the branch with carry
 flag magic. It seems that two twisted minds have found each other
 here :)

 One of my concerns was adding a branch in here... so that is settled,
 and if everybody else feels like this is easier to reason about...
 go ahead :) (I'll keep my itch to myself then.)
 Just that I can't... if performance is a concern, adding the LOCK
 prefix to the addb outperforms the xadd significantly:

Hm, yes.  So using the lock prefix on add instead of the mfence?  Hm.

J


Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-10-10 Thread Jeremy Fitzhardinge
On 10/10/2011 12:32 AM, Ingo Molnar wrote:
 * Jeremy Fitzhardinge jer...@goop.org wrote:

 On 10/06/2011 10:40 AM, Jeremy Fitzhardinge wrote:
 However, it looks like locked xadd also has better performance: on
 my Sandybridge laptop (2 cores, 4 threads), the add+mfence is 20% slower
 than locked xadd, so that pretty much settles it unless you think
 there'd be a dramatic difference on an AMD system.
 Konrad measures add+mfence is about 65% slower on AMD Phenom as well.
 xadd also results in smaller/tighter code, right?

Not particularly, mostly because of the overflow-into-the-high-part
compensation.  But it's only a couple of extra instructions, and no
conditionals, so I don't think it would have any concrete effect.

But, as Stephan points out, perhaps locked add is preferable to locked
xadd, since it also has the same barrier as mfence but has
(significantly!) better performance than either mfence or locked xadd...
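
For comparison, a minimal self-contained sketch of what such a locked-add unlock could look like, written in the style of the benchmark programs attached elsewhere in this thread (the struct layout, names, and the +2 increment here are illustrative, not taken from the patch series): the LOCK prefix makes the byte increment a full barrier, so the flag byte can be read afterwards without relying on store-forwarding behaviour.

#include <stdio.h>

struct {
	unsigned char val;	/* ticket head byte */
	unsigned char flag;	/* slowpath flag byte */
} lk;

static void unlock_locked_add(void)
{
	/* release the ticket; the LOCK prefix doubles as a full memory barrier */
	asm volatile("lock addb $0x2, %0" : "+m"(lk.val) : : "memory");

	/* only now inspect the flag; the store above is globally visible */
	if (lk.flag)
		printf("would kick the waiting vCPU here\n");
}

int main(void)
{
	unlock_locked_add();
	return 0;
}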

J


Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-10-06 Thread Stephan Diestelhorst
On Wednesday 28 September 2011, 14:49:56 Linus Torvalds wrote:
 On Wed, Sep 28, 2011 at 11:08 AM, Stephan Diestelhorst
 stephan.diestelho...@amd.com wrote:
 
  I must have missed the part when this turned into the propose-the-
  craziest-way-that-this-still-works.contest :)
 
 So doing it just with the lock addb probably works fine, but I have
 to say that I personally shudder at the surround the locked addb by
 reads from the word, in order to approximate an atomic read of the
 upper bits.
 
 Because what you get is not really an atomic read of the upper bits,
 it's a ok, we'll get the worst case of somebody modifying the upper
 bits at the same time.
 
 Which certainly should *work*, but from a conceptual standpoint, isn't
 it just *much* nicer to say we actually know *exactly* what the upper
 bits were.

Well, we really do NOT want atomicity here. What we really rather want
is sequentiality: free the lock, make the update visible, and THEN
check if someone has gone sleeping on it.

Atomicity only conveniently enforces that the three do not happen in a
different order (with the store becoming visible after the checking
load).

This does not have to be atomic, since spurious wakeups are not a
problem, in particular not with the FIFO-ness of ticket locks.

For that the fence, additional atomic etc. would be IMHO much cleaner
than the crazy overflow logic.

 But I don't care all *that* deeply. I do agree that the xaddw trick is
 pretty tricky. I just happen to think that it's actually *less* tricky
 than read the upper bits separately and depend on subtle ordering
 issues with another writer that happens at the same time on another
 CPU.

Fair enough :)

Stephan
-- 
Stephan Diestelhorst, AMD Operating System Research Center
stephan.diestelho...@amd.com
Tel. +49 (0)351 448 356 719

Advanced Micro Devices GmbH
Einsteinring 24
85609 Aschheim
Germany

Geschaeftsfuehrer: Alberto Bozzo
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632, WEEE-Reg-Nr: DE 12919551




Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-10-06 Thread Jeremy Fitzhardinge
On 10/06/2011 07:04 AM, Stephan Diestelhorst wrote:
 On Wednesday 28 September 2011, 14:49:56 Linus Torvalds wrote:
 Which certainly should *work*, but from a conceptual standpoint, isn't
 it just *much* nicer to say we actually know *exactly* what the upper
 bits were.
 Well, we really do NOT want atomicity here. What we really rather want
 is sequentiality: free the lock, make the update visible, and THEN
 check if someone has gone sleeping on it.

 Atomicity only conveniently enforces that the three do not happen in a
 different order (with the store becoming visible after the checking
 load).

 This does not have to be atomic, since spurious wakeups are not a
 problem, in particular not with the FIFO-ness of ticket locks.

 For that the fence, additional atomic etc. would be IMHO much cleaner
 than the crazy overflow logic.

All things being equal I'd prefer lock-xadd just because it's easier to
analyze the concurrency for, crazy overflow tests or no.  But if
add+mfence turned out to be a performance win, then that would obviously
tip the scales.

However, it looks like locked xadd also has better performance: on
my Sandybridge laptop (2 cores, 4 threads), the add+mfence is 20% slower
than locked xadd, so that pretty much settles it unless you think
there'd be a dramatic difference on an AMD system.

(On Nehalem it was a much less dramatic 2% difference, but still in favour
of locked xadd.)

This is with a dumb-as-rocks "run it in a loop with time" benchmark, but
the results are not very subtle.

J
#include <stdio.h>

struct {
	unsigned char flag;
	unsigned char val;
} l;

int main(int argc, char **argv)
{
	int i;

	for (i = 0; i < 100000000; i++) {	/* iteration count: placeholder, original lost */
		l.val += 2;
		asm volatile("mfence" : : : "memory");
		if (l.flag)
			break;
		asm volatile("" : : : "memory");
	}

	return 0;
}
#include <stdio.h>

union {
	struct {
		unsigned char val;
		unsigned char flag;
	};
	unsigned short lock;
} l = { 0,0 };

int main(int argc, char **argv)
{
	int i;

	for (i = 0; i < 100000000; i++) {	/* iteration count: placeholder, original lost */
		unsigned short inc = 2;
		if (l.val >= (0x100 - 2))
			inc += -1 << 8;	/* compensate for the carry out of the low byte */
		asm volatile("lock; xadd %1,%0" : "+m" (l.lock), "+r" (inc) : );
		if (inc & 0x100)
			break;
		asm volatile("" : : : "memory");
	}

	return 0;
}


Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-10-06 Thread Jeremy Fitzhardinge
On 10/06/2011 10:40 AM, Jeremy Fitzhardinge wrote:
 However, it looks like locked xadd also has better performance: on
 my Sandybridge laptop (2 cores, 4 threads), the add+mfence is 20% slower
 than locked xadd, so that pretty much settles it unless you think
 there'd be a dramatic difference on an AMD system.

Konrad measures add+mfence is about 65% slower on AMD Phenom as well.

J


Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-28 Thread Stephan Diestelhorst
On Tuesday 27 September 2011, 12:44:02 Jeremy Fitzhardinge wrote:
 On 09/27/2011 02:34 AM, Stephan Diestelhorst wrote:
  On Wednesday 14 September 2011, 17:31:32 Jeremy Fitzhardinge wrote:
  This series replaces the existing paravirtualized spinlock mechanism
  with a paravirtualized ticketlock mechanism.
  [...] 
  The unlock code is very straightforward:
 prev = *lock;
 __ticket_unlock_release(lock);
 if (unlikely(__ticket_in_slowpath(lock)))
 __ticket_unlock_slowpath(lock, prev);
 
  which generates:
 push   %rbp
 mov%rsp,%rbp
 
  movzwl (%rdi),%esi
 addb   $0x2,(%rdi)
  movzwl (%rdi),%eax
 testb  $0x1,%ah
 jne1f
 
 pop%rbp
 retq   
 
 ### SLOWPATH START
  1: movzwl (%rdi),%edx
 movzbl %dh,%ecx
 mov%edx,%eax
 and$-2,%ecx # clear TICKET_SLOWPATH_FLAG
 mov%cl,%dh
 cmp%dl,%cl  # test to see if lock is uncontended
 je 3f
 
  2: movzbl %dl,%esi
 callq  *__ticket_unlock_kick# kick anyone waiting
 pop%rbp
 retq   
 
  3: lock cmpxchg %dx,(%rdi) # use cmpxchg to safely write back flag
 jmp2b
 ### SLOWPATH END
  [...]
  Thoughts? Comments? Suggestions?
  You have a nasty data race in your code that can cause a losing
  acquirer to sleep forever, because its setting the TICKET_SLOWPATH flag
  can race with the lock holder releasing the lock.
 
  I used the code for the slow path from the GIT repo.
 
  Let me try to point out an interleaving:
 
  Lock is held by one thread, contains 0x0200.
 
  _Lock holder_   _Acquirer_
  mov$0x200,%eax
  lock xadd %ax,(%rdi)
  // ax:= 0x0200, lock:= 0x0400
  ...
  // this guy spins for a while, reading
  // the lock
  ...
  //trying to free the lock
  movzwl (%rdi),%esi (esi:=0x0400)
  addb   $0x2,(%rdi) (LOCAL copy of lock is now: 0x0402)
  movzwl (%rdi),%eax (local forwarding from previous store: eax := 0x0402)
  testb  $0x1,%ah(no wakeup of anybody)
  jne1f
 
  callq  *__ticket_lock_spinning
...
// __ticket_enter_slowpath(lock)
lock or (%rdi), $0x100
// (global view of lock := 0x0500)
  ...
ACCESS_ONCE(lock-tickets.head) == want
// (reads 0x00)
  ...
xen_poll_irq(irq); // goes to sleep
  ...
  [addb   $0x2,(%rdi)]
  // (becomes globally visible only now! global view of lock := 0x0502)
  ...
 
  Your code is reusing the (just about) safe version of unlocking a
  spinlock without understanding the effect that close has on later
  memory ordering. It may work on CPUs that cannot do narrow - wide
  store to load forwarding and have to make the addb store visible
  globally. This is an implementation artifact of specific uarches, and
  you mustn't rely on it, since our specified memory model allows looser
  behaviour.
 
 Ah, thanks for this observation.  I've seen this bug before when I
 didn't pay attention to the unlock W vs flag R ordering at all, and I
 was hoping the aliasing would be sufficient - and certainly this seems
 to have been OK on my Intel systems.  But you're saying that it will
 fail on current AMD systems?

I have tested this and have not seen it fail on publicly released AMD
systems. But as I have tried to point out, this does not mean it is
safe to do in software, because future microarchitectures may have more
capable forwarding engines.

 Have you tested this, or is this just from code analysis (which I
 agree with after reviewing the ordering rules in the Intel manual).

We have found a similar issue in Novell's PV ticket lock implementation
during internal product testing.

  Since you want to get that addb out to global memory before the second
  read, either use a LOCK prefix for it, add an MFENCE between addb and
  movzwl, or use a LOCKed instruction that will have a fencing effect
  (e.g., to top-of-stack)between addb and movzwl.
 
 Hm.  I don't really want to do any of those because it will probably
 have a significant effect on the unlock performance; I was really trying
 to avoid adding any more locked instructions.  A previous version of the
 code had an mfence in here, but I hit on the idea of using aliasing to
 get the ordering I want - but overlooked the possible effect of store
 forwarding.

Well, I'd be curious about the actual performance impact. If the store
needs to commit to memory due to aliasing anyways, this would slow down
execution, too. After all it is better to write working than fast code, no? ;-)

Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-28 Thread Linus Torvalds
On Tue, Sep 27, 2011 at 9:44 AM, Jeremy Fitzhardinge jer...@goop.org wrote:

 I guess it comes down to throwing myself on the efficiency of some kind
 of fence instruction.  I guess an lfence would be sufficient; is that
 any more efficient than a full mfence?  At least I can make it so that
 its only present when pv ticket locks are actually in use, so it won't
 affect the native case.

Please don't play with fences, just do the final addb as a locked instruction.

In fact, don't even use an addb, this whole thing is disgusting:

  movzwl (%rdi),%esi (esi:=0x0400)
  addb   $0x2,(%rdi) (LOCAL copy of lock is now: 0x0402)
  movzwl (%rdi),%eax (local forwarding from previous store: eax := 0x0402)

just use lock xaddw there too.

The fact that the PV unlock is going to be much more expensive than a
regular native unlock is just a fact of life. It comes from
fundamentally caring about the old/new value, and has nothing to do
with aliasing. You care about the other bits, and it doesn't matter
where in memory they are.

The native unlock can do a simple addb (or incb), but that doesn't
mean the PV unlock can. There are no ordering issues with the final
unlock in the native case, because the native unlock is like the honey
badger: it don't care. It only cares that the store make it out *some*
day, but it doesn't care about what order the upper/lower bits get
updated. You do. So you have to use a locked access.

Good catch by Stephan.

 Linus


Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-28 Thread Jan Beulich
 On 28.09.11 at 17:38, Linus Torvalds torva...@linux-foundation.org wrote:
 On Tue, Sep 27, 2011 at 9:44 AM, Jeremy Fitzhardinge jer...@goop.org wrote:

 I guess it comes down to throwing myself on the efficiency of some kind
 of fence instruction.  I guess an lfence would be sufficient; is that
 any more efficient than a full mfence?  At least I can make it so that
 its only present when pv ticket locks are actually in use, so it won't
 affect the native case.
 
 Please don't play with fences, just do the final addb as a locked 
 instruction.
 
 In fact, don't even use an addb, this whole thing is disgusting:
 
   movzwl (%rdi),%esi (esi:=0x0400)
   addb   $0x2,(%rdi) (LOCAL copy of lock is now: 0x0402)
   movzwl (%rdi),%eax (local forwarding from previous store: eax := 0x0402)
 
 just use lock xaddw there too.

I'm afraid that's not possible, as that might carry from the low 8 bits
into the upper 8 ones, which must be avoided.

Jan



Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-28 Thread Linus Torvalds
On Wed, Sep 28, 2011 at 8:55 AM, Jan Beulich jbeul...@suse.com wrote:

 just use lock xaddw there too.

 I'm afraid that's not possible, as that might carry from the low 8 bits
 into the upper 8 ones, which must be avoided.

Oh damn, you're right. So I guess the right way to do things is with
cmpxchg, but some nasty mfence setup could do it too.
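
To make the carry problem concrete, a small worked example on the byte layout used in this thread (head in the low byte, tail and slowpath flag in the high byte; the values are illustrative):

#include <stdio.h>

int main(void)
{
	/* head = 0xfe, tail = 0x04: lock word is 0x04fe */
	unsigned short lock = 0x04fe;

	lock += 2;	/* what a plain "lock xaddw $2" would do to the word */

	/* prints 0x0500: head wrapped to 0x00 as intended, but the carry
	 * bumped the tail byte from 0x04 to 0x05, corrupting the tail and
	 * setting what would be the slowpath flag bit */
	printf("0x%04x\n", lock);
	return 0;
}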

  Linus


Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-28 Thread Jeremy Fitzhardinge
On 09/28/2011 09:10 AM, Linus Torvalds wrote:
 On Wed, Sep 28, 2011 at 8:55 AM, Jan Beulich jbeul...@suse.com wrote:
 just use lock xaddw there too.
 I'm afraid that's not possible, as that might carry from the low 8 bits
 into the upper 8 ones, which must be avoided.
 Oh damn, you're right. So I guess the right way to do things is with
 cmpxchg, but some nasty mfence setup could do it too.

Could do something like:

        if (ticket->head >= 254)
                prev = xadd(&ticket->head_tail, 0xff02);
        else
                prev = xadd(&ticket->head_tail, 0x0002);

to compensate for the overflow.
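
For reference, the arithmetic behind that compensation on the same illustrative layout: 0xff02 adds 2 to the head byte and -1 (0xff) to the tail byte, and that -1 cancels the carry that the wrapping head pushes upward.

#include <stdio.h>

int main(void)
{
	/* head about to wrap: head = 0xfe, tail = 0x04, word = 0x04fe */
	unsigned short lock = 0x04fe;

	lock += 0xff02;		/* the compensated increment */

	printf("0x%04x\n", lock);	/* 0x0400: head = 0x00, tail still 0x04 */
	return 0;
}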

J


Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-28 Thread Jeremy Fitzhardinge
On 09/28/2011 06:58 AM, Stephan Diestelhorst wrote:
 I have tested this and have not seen it fail on publicly released AMD
 systems. But as I have tried to point out, this does not mean it is
 safe to do in software, because future microarchtectures may have more
 capable forwarding engines.

Sure.

 Have you tested this, or is this just from code analysis (which I
 agree with after reviewing the ordering rules in the Intel manual).
 We have found a similar issue in Novell's PV ticket lock implementation
 during internal product testing.

Jan may have picked it up from an earlier set of my patches.

 Since you want to get that addb out to global memory before the second
 read, either use a LOCK prefix for it, add an MFENCE between addb and
 movzwl, or use a LOCKed instruction that will have a fencing effect
 (e.g., to top-of-stack)between addb and movzwl.
 Hm.  I don't really want to do any of those because it will probably
 have a significant effect on the unlock performance; I was really trying
 to avoid adding any more locked instructions.  A previous version of the
 code had an mfence in here, but I hit on the idea of using aliasing to
 get the ordering I want - but overlooked the possible effect of store
 forwarding.
 Well, I'd be curious about the actual performance impact. If the store
 needs to commit to memory due to aliasing anyways, this would slow down
 execution, too. After all it is better to write working than fast code,
 no? ;-)

Rule of thumb is that AMD tends to do things like lock and fence more
efficiently than Intel - at least historically.  I don't know if that's
still true for current Intel microarchitectures.

 I guess it comes down to throwing myself on the efficiency of some kind
 of fence instruction.  I guess an lfence would be sufficient; is that
 any more efficient than a full mfence?
 An lfence should not be sufficient, since that essentially is a NOP on
 WB memory. You really want a full fence here, since the store needs to
 be published before reading the lock with the next load.

The Intel manual reads:

Reads cannot pass earlier LFENCE and MFENCE instructions.
Writes cannot pass earlier LFENCE, SFENCE, and MFENCE instructions.
LFENCE instructions cannot pass earlier reads.

Which I interpreted as meaning that an lfence would prevent forwarding. 
 But I guess it doesn't say "LFENCE instructions cannot pass earlier
 writes", which means that the lfence could logically happen before the
write, thereby allowing forwarding?  Or should I be reading this some
other way?

 Could you give me a pointer to AMD's description of the ordering rules?
 They should be in AMD64 Architecture Programmer's Manual Volume 2:
 System Programming, Section 7.2 Multiprocessor Memory Access Ordering.

 http://developer.amd.com/documentation/guides/pages/default.aspx#manuals

 Let me know if you have some clarifying suggestions. We are currently
 revising these documents...

I find the English descriptions of these kinds of things frustrating to
read because of ambiguities in the precise meaning of words like pass,
ahead, behind in these contexts.  I find the prose useful to get an
overview, but when I have a specific question I wonder if something more
formal would be useful.
I guess it's implied that anything that is not prohibited by the
ordering rules is allowed, but it wouldn't hurt to say it explicitly.
That said, the AMD description seems clearer and more explicit than the
Intel manual (esp since it specifically discusses the problem here).

Thanks,
J


Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-28 Thread Linus Torvalds
On Wed, Sep 28, 2011 at 9:47 AM, Jeremy Fitzhardinge jer...@goop.org wrote:

 Could do something like:

        if (ticket-head = 254)
                prev = xadd(ticket-head_tail, 0xff02);
        else
                prev = xadd(ticket-head_tail, 0x0002);

 to compensate for the overflow.

Oh wow. You have an even more twisted mind than I do.

I guess that will work, exactly because we control head and thus can
know about the overflow in the low byte. But boy is that ugly ;)

But at least you wouldn't need to do the loop with cmpxchg. So it's
twisted and ugly, but might be practical.

   Linus


Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-28 Thread H. Peter Anvin
On 09/28/2011 10:22 AM, Linus Torvalds wrote:
 On Wed, Sep 28, 2011 at 9:47 AM, Jeremy Fitzhardinge jer...@goop.org wrote:

 Could do something like:

if (ticket-head = 254)
prev = xadd(ticket-head_tail, 0xff02);
else
prev = xadd(ticket-head_tail, 0x0002);

 to compensate for the overflow.
 
 Oh wow. You havge an even more twisted mind than I do.
 
 I guess that will work, exactly because we control head and thus can
 know about the overflow in the low byte. But boy is that ugly ;)
 
 But at least you wouldn't need to do the loop with cmpxchg. So it's
 twisted and ugly, but migth be practical.
 

I suspect it should be coded as -254 in order to use a short immediate
if that is even possible...

-hpa

-- 
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel.  I don't speak on their behalf.



Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-28 Thread Jeremy Fitzhardinge
On 09/28/2011 10:24 AM, H. Peter Anvin wrote:
 On 09/28/2011 10:22 AM, Linus Torvalds wrote:
 On Wed, Sep 28, 2011 at 9:47 AM, Jeremy Fitzhardinge jer...@goop.org wrote:
 Could do something like:

if (ticket-head = 254)
prev = xadd(ticket-head_tail, 0xff02);
else
prev = xadd(ticket-head_tail, 0x0002);

 to compensate for the overflow.
 Oh wow. You havge an even more twisted mind than I do.

 I guess that will work, exactly because we control head and thus can
 know about the overflow in the low byte. But boy is that ugly ;)

 But at least you wouldn't need to do the loop with cmpxchg. So it's
 twisted and ugly, but migth be practical.

 I suspect it should be coded as -254 in order to use a short immediate
 if that is even possible...

I'm about to test:

static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
{
	if (TICKET_SLOWPATH_FLAG &&
	    unlikely(arch_static_branch(&paravirt_ticketlocks_enabled))) {
		arch_spinlock_t prev;
		__ticketpair_t inc = TICKET_LOCK_INC;

		if (lock->tickets.head >= (1 << TICKET_SHIFT) - TICKET_LOCK_INC)
			inc += -1 << TICKET_SHIFT;

		prev.head_tail = xadd(&lock->head_tail, inc);

		if (prev.tickets.tail & TICKET_SLOWPATH_FLAG)
			__ticket_unlock_slowpath(lock, prev);
	} else
		__ticket_unlock_release(lock);
}

Which, frankly, is not something I particularly want to put my name to.

It makes gcc go into paroxysms of trickiness:

 4a8:   80 3f fecmpb   $0xfe,(%rdi)
 4ab:   19 f6   sbb%esi,%esi
 4ad:   66 81 e6 00 01  and$0x100,%si
 4b2:   66 81 ee fe 00  sub$0xfe,%si
 4b7:   f0 66 0f c1 37  lock xadd %si,(%rdi)

...which is pretty neat, actually.
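
Spelled out, the sequence computes the conditional increment from the C above without taking a branch; a small sketch of the equivalent computation (the helper name and the test values are illustrative):

#include <stdio.h>

/* cmpb $0xfe,(%rdi)  ->  CF = (head < 0xfe)
 * sbb  %esi,%esi     ->  esi = CF ? 0xffffffff : 0
 * and  $0x100,%si    ->  si  = CF ? 0x0100 : 0
 * sub  $0xfe,%si     ->  si  = CF ? 0x0002 : 0xff02
 * i.e. the plain +2 when head < 0xfe, and the carry-compensated 0xff02
 * when the head byte is about to wrap. */
static unsigned short unlock_inc(unsigned char head)
{
	return (head < 0xfe ? 0x0100 : 0x0000) - 0x00fe;
}

int main(void)
{
	printf("head=0x10 -> inc=0x%04x\n", unlock_inc(0x10));	/* 0x0002 */
	printf("head=0xfe -> inc=0x%04x\n", unlock_inc(0xfe));	/* 0xff02 */
	return 0;
}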

J


Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-28 Thread Stephan Diestelhorst
On Wednesday 28 September 2011 19:50:08 Jeremy Fitzhardinge wrote:
 On 09/28/2011 10:24 AM, H. Peter Anvin wrote:
  On 09/28/2011 10:22 AM, Linus Torvalds wrote:
  On Wed, Sep 28, 2011 at 9:47 AM, Jeremy Fitzhardinge jer...@goop.org 
  wrote:
  Could do something like:
 
 if (ticket-head = 254)
 prev = xadd(ticket-head_tail, 0xff02);
 else
 prev = xadd(ticket-head_tail, 0x0002);
 
  to compensate for the overflow.
  Oh wow. You havge an even more twisted mind than I do.
 
  I guess that will work, exactly because we control head and thus can
  know about the overflow in the low byte. But boy is that ugly ;)
 
  But at least you wouldn't need to do the loop with cmpxchg. So it's
  twisted and ugly, but migth be practical.
 
  I suspect it should be coded as -254 in order to use a short immediate
  if that is even possible...
 
 I'm about to test:
 
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
   if (TICKET_SLOWPATH_FLAG  
 unlikely(arch_static_branch(paravirt_ticketlocks_enabled))) {
   arch_spinlock_t prev;
   __ticketpair_t inc = TICKET_LOCK_INC;
 
   if (lock-tickets.head = (1  TICKET_SHIFT) - TICKET_LOCK_INC)
   inc += -1  TICKET_SHIFT;
 
   prev.head_tail = xadd(lock-head_tail, inc);
 
   if (prev.tickets.tail  TICKET_SLOWPATH_FLAG)
   __ticket_unlock_slowpath(lock, prev);
   } else
   __ticket_unlock_release(lock);
 }
 
 Which, frankly, is not something I particularly want to put my name to.

I must have missed the part when this turned into the propose-the-
craziest-way-that-this-still-works contest :)

What is wrong with converting the original addb into a lock addb? The
crazy wrap around tricks add a conditional and lots of headache. The
lock addb/w is clean. We are paying an atomic in both cases, so I just
don't see the benefit of the second solution.

Stephan
-- 
Stephan Diestelhorst, AMD Operating System Research Center
stephan.diestelho...@amd.com, Tel. +49 (0)351 448 356 719

Advanced Micro Devices GmbH
Einsteinring 24
85609 Aschheim
Germany

Geschaeftsfuehrer: Alberto Bozzo;
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632, WEEE-Reg-Nr: DE 12919551 



Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-28 Thread Stephan Diestelhorst
On Wednesday 28 September 2011 18:44:25 Jeremy Fitzhardinge wrote:
 On 09/28/2011 06:58 AM, Stephan Diestelhorst wrote:
  I guess it comes down to throwing myself on the efficiency of some kind
  of fence instruction.  I guess an lfence would be sufficient; is that
  any more efficient than a full mfence?
  An lfence should not be sufficient, since that essentially is a NOP on
  WB memory. You really want a full fence here, since the store needs to
  be published before reading the lock with the next load.
 
 The Intel manual reads:
 
 Reads cannot pass earlier LFENCE and MFENCE instructions.
 Writes cannot pass earlier LFENCE, SFENCE, and MFENCE instructions.
 LFENCE instructions cannot pass earlier reads.
 
 Which I interpreted as meaning that an lfence would prevent forwarding. 
 But I guess it doesn't say lfence instructions cannot pass earlier
 writes, which means that the lfence could logically happen before the
 write, thereby allowing forwarding?  Or should I be reading this some
 other way?

Indeed. You are reading this the right way. 

  Could you give me a pointer to AMD's description of the ordering rules?
  They should be in AMD64 Architecture Programmer's Manual Volume 2:
  System Programming, Section 7.2 Multiprocessor Memory Access Ordering.
 
  http://developer.amd.com/documentation/guides/pages/default.aspx#manuals
 
  Let me know if you have some clarifying suggestions. We are currently
  revising these documents...
 
 I find the English descriptions of these kinds of things frustrating to
 read because of ambiguities in the precise meaning of words like pass,
 ahead, behind in these contexts.  I find the prose useful to get an
 overview, but when I have a specific question I wonder if something more
 formal would be useful.

It would be, and some have started this effort:

http://www.cl.cam.ac.uk/~pes20/weakmemory/

But I am not sure whether that particular nasty forwarding case is
captured properly in their model. It is on my list of things to check.

 I guess it's implied that anything that is not prohibited by the
 ordering rules is allowed, but it wouldn't hurt to say it explicitly.
 That said, the AMD description seems clearer and more explicit than the
 Intel manual (esp since it specifically discusses the problem here).

Thanks! Glad you like it :)

Stephan
-- 
Stephan Diestelhorst, AMD Operating System Research Center
stephan.diestelho...@amd.com, Tel. +49 (0)351 448 356 719

Advanced Micro Devices GmbH
Einsteinring 24
85609 Aschheim
Germany

Geschaeftsfuehrer: Alberto Bozzo;
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632, WEEE-Reg-Nr: DE 12919551 



Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-28 Thread Jeremy Fitzhardinge
On 09/28/2011 11:08 AM, Stephan Diestelhorst wrote:
 On Wednesday 28 September 2011 19:50:08 Jeremy Fitzhardinge wrote:
 On 09/28/2011 10:24 AM, H. Peter Anvin wrote:
 On 09/28/2011 10:22 AM, Linus Torvalds wrote:
 On Wed, Sep 28, 2011 at 9:47 AM, Jeremy Fitzhardinge jer...@goop.org 
 wrote:
 Could do something like:

if (ticket-head = 254)
prev = xadd(ticket-head_tail, 0xff02);
else
prev = xadd(ticket-head_tail, 0x0002);

 to compensate for the overflow.
 Oh wow. You havge an even more twisted mind than I do.

 I guess that will work, exactly because we control head and thus can
 know about the overflow in the low byte. But boy is that ugly ;)

 But at least you wouldn't need to do the loop with cmpxchg. So it's
 twisted and ugly, but migth be practical.

 I suspect it should be coded as -254 in order to use a short immediate
 if that is even possible...
 I'm about to test:

 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
  if (TICKET_SLOWPATH_FLAG  
 unlikely(arch_static_branch(paravirt_ticketlocks_enabled))) {
  arch_spinlock_t prev;
  __ticketpair_t inc = TICKET_LOCK_INC;

  if (lock-tickets.head = (1  TICKET_SHIFT) - TICKET_LOCK_INC)
  inc += -1  TICKET_SHIFT;

  prev.head_tail = xadd(lock-head_tail, inc);

  if (prev.tickets.tail  TICKET_SLOWPATH_FLAG)
  __ticket_unlock_slowpath(lock, prev);
  } else
  __ticket_unlock_release(lock);
 }

 Which, frankly, is not something I particularly want to put my name to.
 I must have missed the part when this turned into the propose-the-
 craziest-way-that-this-still-works.contest :)

 What is wrong with converting the original addb into a lock addb? The
 crazy wrap around tricks add a conditional and lots of headache. The
 lock addb/w is clean. We are paying an atomic in both cases, so I just
 don't see the benefit of the second solution.

Well, it does end up generating surprisingly nice code.  And to be
honest, being able to do the unlock and atomically fetch the flag as one
operation makes it much easier to reason about.

I'll do a locked add variant as well to see how it turns out.

Do you think locked add is better than unlocked + mfence?

Thanks,
J


Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-28 Thread Linus Torvalds
On Wed, Sep 28, 2011 at 11:08 AM, Stephan Diestelhorst
stephan.diestelho...@amd.com wrote:

 I must have missed the part when this turned into the propose-the-
 craziest-way-that-this-still-works.contest :)

So doing it just with the lock addb probably works fine, but I have
to say that I personally shudder at the "surround the locked addb by
reads from the word, in order to approximate an atomic read of the
upper bits".

Because what you get is not really an atomic read of the upper bits,
it's a "ok, we'll get the worst case of somebody modifying the upper
bits at the same time".

Which certainly should *work*, but from a conceptual standpoint, isn't
it just *much* nicer to say "we actually know *exactly* what the upper
bits were".

But I don't care all *that* deeply. I do agree that the xaddw trick is
pretty tricky. I just happen to think that it's actually *less* tricky
than read the upper bits separately and depend on subtle ordering
issues with another writer that happens at the same time on another
CPU.

So I can live with either form - as long as it works. I think it might
be easier to argue that the xaddw is guaranteed to work, because all
values at all points are unarguably atomic (yeah, we read the lower
bits nonatomically, but as the owner of the lock we know that nobody
else can write them).

 Linus


Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-28 Thread Jeremy Fitzhardinge
On 09/28/2011 11:49 AM, Linus Torvalds wrote:
 But I don't care all *that* deeply. I do agree that the xaddw trick is
 pretty tricky. I just happen to think that it's actually *less* tricky
 than read the upper bits separately and depend on subtle ordering
 issues with another writer that happens at the same time on another
 CPU.

 So I can live with either form - as long as it works. I think it might
 be easier to argue that the xaddw is guaranteed to work, because all
 values at all points are unarguably atomic (yeah, we read the lower
 bits nonatomically, but as the owner of the lock we know that nobody
 else can write them).

Exactly.  I just did a locked add variant, and while the code looks a
little simpler, it definitely has more actual complexity to analyze.

J


Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-27 Thread Stephan Diestelhorst
On Wednesday 14 September 2011, 17:31:32 Jeremy Fitzhardinge wrote:
 This series replaces the existing paravirtualized spinlock mechanism
 with a paravirtualized ticketlock mechanism.
[...] 
 The unlock code is very straightforward:
   prev = *lock;
   __ticket_unlock_release(lock);
   if (unlikely(__ticket_in_slowpath(lock)))
   __ticket_unlock_slowpath(lock, prev);
 
 which generates:
   push   %rbp
   mov%rsp,%rbp
 
 movzwl (%rdi),%esi
   addb   $0x2,(%rdi)
 movzwl (%rdi),%eax
   testb  $0x1,%ah
   jne1f
 
   pop%rbp
   retq   
 
   ### SLOWPATH START
 1:movzwl (%rdi),%edx
   movzbl %dh,%ecx
   mov%edx,%eax
   and$-2,%ecx # clear TICKET_SLOWPATH_FLAG
   mov%cl,%dh
   cmp%dl,%cl  # test to see if lock is uncontended
   je 3f
 
 2:movzbl %dl,%esi
   callq  *__ticket_unlock_kick# kick anyone waiting
   pop%rbp
   retq   
 
 3:lock cmpxchg %dx,(%rdi) # use cmpxchg to safely write back flag
   jmp2b
   ### SLOWPATH END
[...]
 Thoughts? Comments? Suggestions?

You have a nasty data race in your code that can cause a losing
acquirer to sleep forever, because its setting the TICKET_SLOWPATH flag
can race with the lock holder releasing the lock.

I used the code for the slow path from the GIT repo.

Let me try to point out an interleaving:

Lock is held by one thread, contains 0x0200.

_Lock holder_   _Acquirer_
mov$0x200,%eax
lock xadd %ax,(%rdi)
// ax:= 0x0200, lock:= 0x0400
...
// this guy spins for a while, reading
// the lock
...
//trying to free the lock
movzwl (%rdi),%esi (esi:=0x0400)
addb   $0x2,(%rdi) (LOCAL copy of lock is now: 0x0402)
movzwl (%rdi),%eax (local forwarding from previous store: eax := 0x0402)
testb  $0x1,%ah(no wakeup of anybody)
jne1f

callq  *__ticket_lock_spinning
  ...
  // __ticket_enter_slowpath(lock)
  lock or (%rdi), $0x100
  // (global view of lock := 0x0500)
...
  ACCESS_ONCE(lock->tickets.head) == want
  // (reads 0x00)
...
  xen_poll_irq(irq); // goes to sleep
...
[addb   $0x2,(%rdi)]
// (becomes globally visible only now! global view of lock := 0x0502)
...

Your code is reusing the (just about) safe version of unlocking a
spinlock without understanding the effect that close has on later
memory ordering. It may work on CPUs that cannot do narrow -> wide
store-to-load forwarding and have to make the addb store visible
globally. This is an implementation artifact of specific uarches, and
you mustn't rely on it, since our specified memory model allows looser
behaviour.

Since you want to get that addb out to global memory before the second
read, either use a LOCK prefix for it, add an MFENCE between addb and
movzwl, or use a LOCKed instruction that will have a fencing effect
(e.g., to top-of-stack) between addb and movzwl.
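
Sketched as inline asm on a toy two-byte lock, the three suggestions look roughly like this (an illustration of the suggestion only, not code from the patch series):

#include <stdio.h>

struct toy_lock {
	unsigned char head;	/* gets the +2 on unlock */
	unsigned char flag;	/* must only be read after the add is visible */
};

static int unlock_and_check(struct toy_lock *l, int variant)
{
	switch (variant) {
	case 0:	/* (a) LOCK-prefixed add: the RMW itself is a full fence */
		asm volatile("lock addb $0x2, %0" : "+m" (l->head) : : "memory");
		break;
	case 1:	/* (b) plain add followed by an explicit mfence */
		asm volatile("addb $0x2, %0" : "+m" (l->head) : : "memory");
		asm volatile("mfence" : : : "memory");
		break;
	default: /* (c) plain add, then a locked no-op to top-of-stack */
		asm volatile("addb $0x2, %0" : "+m" (l->head) : : "memory");
		asm volatile("lock orl $0x0,(%%rsp)" : : : "memory");
		break;
	}
	/* with any of the three, this read cannot be satisfied before the
	 * add above has become globally visible */
	return l->flag;
}

int main(void)
{
	struct toy_lock l = { 0, 0 };
	printf("%d %d %d\n", unlock_and_check(&l, 0),
	       unlock_and_check(&l, 1), unlock_and_check(&l, 2));
	return 0;
}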

Stephan
-- 
Stephan Diestelhorst, AMD Operating System Research Center
stephan.diestelho...@amd.com
Tel. +49 (0)351 448 356 719

Advanced Micro Devices GmbH
Einsteinring 24
85609 Aschheim
Germany

Geschaeftsfuehrer: Alberto Bozzo 
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632, WEEE-Reg-Nr: DE 12919551




Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-27 Thread Jeremy Fitzhardinge
On 09/27/2011 02:34 AM, Stephan Diestelhorst wrote:
 On Wednesday 14 September 2011, 17:31:32 Jeremy Fitzhardinge wrote:
 This series replaces the existing paravirtualized spinlock mechanism
 with a paravirtualized ticketlock mechanism.
 [...] 
 The unlock code is very straightforward:
  prev = *lock;
  __ticket_unlock_release(lock);
  if (unlikely(__ticket_in_slowpath(lock)))
  __ticket_unlock_slowpath(lock, prev);

 which generates:
  push   %rbp
  mov%rsp,%rbp

 movzwl (%rdi),%esi
  addb   $0x2,(%rdi)
 movzwl (%rdi),%eax
  testb  $0x1,%ah
  jne1f

  pop%rbp
  retq   

  ### SLOWPATH START
 1:   movzwl (%rdi),%edx
  movzbl %dh,%ecx
  mov%edx,%eax
  and$-2,%ecx # clear TICKET_SLOWPATH_FLAG
  mov%cl,%dh
  cmp%dl,%cl  # test to see if lock is uncontended
  je 3f

 2:   movzbl %dl,%esi
  callq  *__ticket_unlock_kick# kick anyone waiting
  pop%rbp
  retq   

 3:   lock cmpxchg %dx,(%rdi) # use cmpxchg to safely write back flag
  jmp2b
  ### SLOWPATH END
 [...]
 Thoughts? Comments? Suggestions?
 You have a nasty data race in your code that can cause a losing
 acquirer to sleep forever, because its setting the TICKET_SLOWPATH flag
 can race with the lock holder releasing the lock.

 I used the code for the slow path from the GIT repo.

 Let me try to point out an interleaving:

 Lock is held by one thread, contains 0x0200.

 _Lock holder_   _Acquirer_
 mov$0x200,%eax
 lock xadd %ax,(%rdi)
 // ax:= 0x0200, lock:= 0x0400
 ...
 // this guy spins for a while, reading
 // the lock
 ...
 //trying to free the lock
 movzwl (%rdi),%esi (esi:=0x0400)
 addb   $0x2,(%rdi) (LOCAL copy of lock is now: 0x0402)
 movzwl (%rdi),%eax (local forwarding from previous store: eax := 0x0402)
 testb  $0x1,%ah(no wakeup of anybody)
 jne1f

 callq  *__ticket_lock_spinning
   ...
   // __ticket_enter_slowpath(lock)
   lock or (%rdi), $0x100
   // (global view of lock := 0x0500)
   ...
   ACCESS_ONCE(lock-tickets.head) == want
   // (reads 0x00)
   ...
   xen_poll_irq(irq); // goes to sleep
 ...
 [addb   $0x2,(%rdi)]
 // (becomes globally visible only now! global view of lock := 0x0502)
 ...

 Your code is reusing the (just about) safe version of unlocking a
 spinlock without understanding the effect that close has on later
 memory ordering. It may work on CPUs that cannot do narrow - wide
 store to load forwarding and have to make the addb store visible
 globally. This is an implementation artifact of specific uarches, and
 you mustn't rely on it, since our specified memory model allows looser
 behaviour.

Ah, thanks for this observation.  I've seen this bug before when I
didn't pay attention to the unlock W vs flag R ordering at all, and I
was hoping the aliasing would be sufficient - and certainly this seems
to have been OK on my Intel systems.  But you're saying that it will
fail on current AMD systems?  Have you tested this, or is this just from
code analysis (which I agree with after reviewing the ordering rules in
the Intel manual).

 Since you want to get that addb out to global memory before the second
 read, either use a LOCK prefix for it, add an MFENCE between addb and
 movzwl, or use a LOCKed instruction that will have a fencing effect
 (e.g., to top-of-stack)between addb and movzwl.

Hm.  I don't really want to do any of those because it will probably
have a significant effect on the unlock performance; I was really trying
to avoid adding any more locked instructions.  A previous version of the
code had an mfence in here, but I hit on the idea of using aliasing to
get the ordering I want - but overlooked the possible effect of store
forwarding.

I guess it comes down to throwing myself on the efficiency of some kind
of fence instruction.  I guess an lfence would be sufficient; is that
any more efficient than a full mfence?  At least I can make it so that
its only present when pv ticket locks are actually in use, so it won't
affect the native case.

Could you give me a pointer to AMD's description of the ordering rules?

Thanks,
J


[PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-14 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

[ Changes since last posting:
  - fix bugs exposed by the cold light of testing
- make the slow flag read in unlock cover the whole lock
  to force ordering WRT the unlock write
- when kicking on unlock, only look for the CPU *we* released
  (ie, head value the unlock resulted in), rather than re-reading
  the new head and kicking on that basis
  - enable PV ticketlocks in Xen HVM guests
]

NOTE: this series is available in:
  git://github.com/jsgf/linux-xen.git upstream/pvticketlock-slowflag
and is based on the previously posted ticketlock cleanup series in
  git://github.com/jsgf/linux-xen.git upstream/ticketlock-cleanup

This series replaces the existing paravirtualized spinlock mechanism
with a paravirtualized ticketlock mechanism.

Ticket locks have an inherent problem in a virtualized case, because
the vCPUs are scheduled rather than running concurrently (ignoring
gang scheduled vCPUs).  This can result in catastrophic performance
collapses when the vCPU scheduler doesn't schedule the correct next
vCPU, and ends up scheduling a vCPU which burns its entire timeslice
spinning.  (Note that this is not the same problem as lock-holder
preemption, which this series also addresses; that's also a problem,
but not catastrophic).

(See Thomas Friebel's talk "Prevent Guests from Spinning Around",
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

Currently we deal with this by having PV spinlocks, which adds a layer
of indirection in front of all the spinlock functions, and defining a
completely new implementation for Xen (and for other pvops users, but
there are none at present).

PV ticketlocks keep the existing ticketlock implementation
(fastpath) as-is, but add a couple of pvops for the slow paths:

- If a CPU has been waiting for a spinlock for SPIN_THRESHOLD
  iterations, then call out to the __ticket_lock_spinning() pvop,
  which allows a backend to block the vCPU rather than spinning.  This
  pvop can set the lock into slowpath state.

- When releasing a lock, if it is in slowpath state, then call
  __ticket_unlock_kick() to kick the next vCPU in line awake.  If the
  lock is no longer in contention, it also clears the slowpath flag.

The slowpath state is stored in the LSB of the lock's tail ticket.
This has the effect of reducing the max number of CPUs by half (so, a
small ticket can deal with 128 CPUs, and a large ticket with 32768).
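
Concretely, the small-ticket layout implied by the generated code in this posting (the lock xadd of 0x200 in the lock fast path and the testb $0x1,%ah in the unlock path) is roughly the following; the constant names follow the series, but these definitions are a reconstruction, not a quotation:

#define TICKET_SHIFT		8	/* 8-bit head and tail */
#define TICKET_SLOWPATH_FLAG	1	/* lives in the LSB of the tail */
#define TICKET_LOCK_INC		2	/* tickets advance in steps of 2 */

typedef struct {
	union {
		unsigned short head_tail;	/* what the xadd operates on */
		struct {
			unsigned char head;	/* ticket currently being served */
			unsigned char tail;	/* next ticket to hand out, flag in bit 0 */
		} tickets;
	};
} toy_arch_spinlock_t;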

This series provides a Xen implementation, but it should be
straightforward to add a KVM implementation as well.

Overall, it results in a large reduction in code, it makes the native
and virtualized cases closer, and it removes a layer of indirection
around all the spinlock functions.

The fast path (taking an uncontended lock which isn't in slowpath
state) is optimal, identical to the non-paravirtualized case.

The inner part of ticket lock code becomes:
	inc = xadd(&lock->tickets, inc);
	inc.tail &= ~TICKET_SLOWPATH_FLAG;

	if (likely(inc.head == inc.tail))
		goto out;

	for (;;) {
		unsigned count = SPIN_THRESHOLD;

		do {
			if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
				goto out;
			cpu_relax();
		} while (--count);
		__ticket_lock_spinning(lock, inc.tail);
	}
out:	barrier();

which results in:
	push   %rbp
	mov    %rsp,%rbp

	mov    $0x200,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f

	pop    %rbp
	retq

	### SLOWPATH START
1:	and    $-2,%edx
	movzbl %dl,%esi

2:	mov    $0x800,%eax
	jmp    4f

3:	pause
	sub    $0x1,%eax
	je     5f

4:	movzbl (%rdi),%ecx
	cmp    %cl,%dl
	jne    3b

	pop    %rbp
	retq

5:	callq  *__ticket_lock_spinning
	jmp    2b
	### SLOWPATH END

with CONFIG_PARAVIRT_SPINLOCKS=n, the code has changed slightly, where
the fastpath case is straight through (taking the lock without
contention), and the spin loop is out of line:

	push   %rbp
	mov    %rsp,%rbp

	mov    $0x100,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f

	pop    %rbp
	retq

	### SLOWPATH START
1:	pause
	movzbl (%rdi),%eax
	cmp    %dl,%al
	jne    1b

	pop    %rbp
	retq
	### SLOWPATH END

The unlock code is very straightforward:
	prev = *lock;
	__ticket_unlock_release(lock);
	if (unlikely(__ticket_in_slowpath(lock)))
		__ticket_unlock_slowpath(lock, prev);

which generates:
	push   %rbp
	mov    %rsp,%rbp

	movzwl (%rdi),%esi
	addb   $0x2,(%rdi)
	movzwl (%rdi),%eax
	testb  $0x1,%ah