Re: [PATCH v2] arch/powerpc/include/asm/barrier.h: redefine rmb and wmb to lwsync

2023-02-23 Thread Michael Ellerman
Kautuk Consul writes: > >> You are correct, the patch is wrong because it fails to account for IO >> accesses. > > Okay, I looked at the PowerPC ISA and found: > "The memory barrier provides an ordering function for the storage accesses > caused by Load, Store, and dcbz instructions that are

Re: [PATCH v2] arch/powerpc/include/asm/barrier.h: redefine rmb and wmb to lwsync

2023-02-22 Thread Kautuk Consul
> You are correct, the patch is wrong because it fails to account for IO > accesses. Okay, I looked at the PowerPC ISA and found: "The memory barrier provides an ordering function for the storage accesses caused by Load, Store, and dcbz instructions that are executed by the processor executing

Re: [PATCH v2] arch/powerpc/include/asm/barrier.h: redefine rmb and wmb to lwsync

2023-02-22 Thread Kautuk Consul
On 2023-02-22 20:16:10, Paul E. McKenney wrote: > On Thu, Feb 23, 2023 at 09:31:48AM +0530, Kautuk Consul wrote: > > On 2023-02-22 09:47:19, Paul E. McKenney wrote: > > > On Wed, Feb 22, 2023 at 02:33:44PM +0530, Kautuk Consul wrote: > > > > A link from ibm.com states: > > > > "Ensures that all

Re: [PATCH v2] arch/powerpc/include/asm/barrier.h: redefine rmb and wmb to lwsync

2023-02-22 Thread Paul E. McKenney
On Thu, Feb 23, 2023 at 09:31:48AM +0530, Kautuk Consul wrote: > On 2023-02-22 09:47:19, Paul E. McKenney wrote: > > On Wed, Feb 22, 2023 at 02:33:44PM +0530, Kautuk Consul wrote: > > > A link from ibm.com states: > > > "Ensures that all instructions preceding the call to __lwsync > > > complete

Re: [PATCH v2] arch/powerpc/include/asm/barrier.h: redefine rmb and wmb to lwsync

2023-02-22 Thread Kautuk Consul
On 2023-02-23 14:51:25, Michael Ellerman wrote: > Hi Paul, > > "Paul E. McKenney" writes: > > On Wed, Feb 22, 2023 at 02:33:44PM +0530, Kautuk Consul wrote: > >> A link from ibm.com states: > >> "Ensures that all instructions preceding the call to __lwsync > >> complete before any subsequent

Re: [PATCH v2] arch/powerpc/include/asm/barrier.h: redefine rmb and wmb to lwsync

2023-02-22 Thread Kautuk Consul
On 2023-02-22 09:47:19, Paul E. McKenney wrote: > On Wed, Feb 22, 2023 at 02:33:44PM +0530, Kautuk Consul wrote: > > A link from ibm.com states: > > "Ensures that all instructions preceding the call to __lwsync > > complete before any subsequent store instructions can be executed > > on the

Re: [PATCH v2] arch/powerpc/include/asm/barrier.h: redefine rmb and wmb to lwsync

2023-02-22 Thread Michael Ellerman
Hi Paul, "Paul E. McKenney" writes: > On Wed, Feb 22, 2023 at 02:33:44PM +0530, Kautuk Consul wrote: >> A link from ibm.com states: >> "Ensures that all instructions preceding the call to __lwsync >> complete before any subsequent store instructions can be executed >> on the processor that

Re: [PATCH v2] arch/powerpc/include/asm/barrier.h: redefine rmb and wmb to lwsync

2023-02-22 Thread Paul E. McKenney
On Wed, Feb 22, 2023 at 02:33:44PM +0530, Kautuk Consul wrote: > A link from ibm.com states: > "Ensures that all instructions preceding the call to __lwsync > complete before any subsequent store instructions can be executed > on the processor that executed the function. Also, it ensures that >

Re: [PATCH v2] arch/powerpc/include/asm/barrier.h: redefine rmb and wmb to lwsync

2023-02-22 Thread Kautuk Consul
> >> I'd have preferred 'asm volatile' though. > > Sorry about that! That wasn't the intent of this patch. > > Probably another patch series should change this manner of #defining > > assembly. > > Why add new lines the wrong way and then need another patch to make > them right? > > When

Re: [PATCH v2] arch/powerpc/include/asm/barrier.h: redefine rmb and wmb to lwsync

2023-02-22 Thread Christophe Leroy
On 22/02/2023 at 10:46, Kautuk Consul wrote: >> >> Reviewed-by: Christophe Leroy > Thanks! >> >>> --- >>> arch/powerpc/include/asm/barrier.h | 7 +++ >>> 1 file changed, 7 insertions(+) >>> >>> diff --git a/arch/powerpc/include/asm/barrier.h >>> b/arch/powerpc/include/asm/barrier.h

Re: [PATCH v2] arch/powerpc/include/asm/barrier.h: redefine rmb and wmb to lwsync

2023-02-22 Thread Kautuk Consul
On Wed, Feb 22, 2023 at 09:44:54AM +, Christophe Leroy wrote: > > > On 22/02/2023 at 10:30, Kautuk Consul wrote: > > Again, could some IBM/non-IBM employees do basic sanity kernel load > > testing on PPC64 UP and SMP systems for this patch? I > > would deeply appreciate it! :-) > > And can

Re: [PATCH v2] arch/powerpc/include/asm/barrier.h: redefine rmb and wmb to lwsync

2023-02-22 Thread Kautuk Consul
> > Reviewed-by: Christophe Leroy Thanks! > > > --- > > arch/powerpc/include/asm/barrier.h | 7 +++ > > 1 file changed, 7 insertions(+) > > > > diff --git a/arch/powerpc/include/asm/barrier.h > > b/arch/powerpc/include/asm/barrier.h > > index b95b666f0374..e088dacc0ee8 100644 > > ---

Re: [PATCH v2] arch/powerpc/include/asm/barrier.h: redefine rmb and wmb to lwsync

2023-02-22 Thread Christophe Leroy
On 22/02/2023 at 10:30, Kautuk Consul wrote: > Again, could some IBM/non-IBM employees do basic sanity kernel load > testing on PPC64 UP and SMP systems for this patch? I > would deeply appreciate it! :-) And can 'non-IBM' 'non employees' do something? :) > > Thanks again! > Did you try on

Re: [PATCH v2] arch/powerpc/include/asm/barrier.h: redefine rmb and wmb to lwsync

2023-02-22 Thread Christophe Leroy
On 22/02/2023 at 10:03, Kautuk Consul wrote: > A link from ibm.com states: > "Ensures that all instructions preceding the call to __lwsync > complete before any subsequent store instructions can be executed > on the processor that executed the function. Also, it ensures that > all load

Re: [PATCH v2] arch/powerpc/include/asm/barrier.h: redefine rmb and wmb to lwsync

2023-02-22 Thread Kautuk Consul
Again, could some IBM/non-IBM employees do basic sanity kernel load testing on PPC64 UP and SMP systems for this patch? I would deeply appreciate it! :-) Thanks again!

[PATCH v2] arch/powerpc/include/asm/barrier.h: redefine rmb and wmb to lwsync

2023-02-22 Thread Kautuk Consul
A link from ibm.com states: "Ensures that all instructions preceding the call to __lwsync complete before any subsequent store instructions can be executed on the processor that executed the function. Also, it ensures that all load instructions preceding the call to __lwsync complete before