On Thursday 06 December 2007 20:33, Li Zefan wrote:
The casting is safe only when the list_head member is the
first member of the structure.
Even so, I don't think it's too safe :) It might technically work,
but it could break more easily.
So even if you find places where list_head is the first
, while the lack of a memory
barrier could allow incorrect results during normal operation
as well.
Convert it to use a regular spinlock instead.
Signed-off-by: Nick Piggin [EMAIL PROTECTED]
Acked-by: Benjamin Herrenschmidt [EMAIL PROTECTED]
---
Index: linux-2.6/arch/ppc/xmon/start.c
This isn't a bugfix, but may help performance slightly...
--
powerpc 64-bit hash pte lock bit is an actual lock, so it can take advantage
of lock bitops for slightly more optimal memory barriers (can avoid an lwsync
in the trylock).
Signed-off-by: Nick Piggin [EMAIL PROTECTED]
Acked-by: Benjamin
On Tue, Nov 20, 2007 at 04:28:02PM +1100, Paul Mackerras wrote:
Nick Piggin writes:
xmon uses a bit lock spinlock but doesn't close the critical section
when releasing it. It doesn't seem like a big deal because it will
eventually break out of the lock anyway, but presumably that's only
On Tue, Nov 20, 2007 at 05:08:24PM +1100, Benjamin Herrenschmidt wrote:
On Tue, 2007-11-20 at 06:09 +0100, Nick Piggin wrote:
This isn't a bugfix, but may help performance slightly...
--
powerpc 64-bit hash pte lock bit is an actual lock, so it can take advantage
of lock bitops
On Friday 19 October 2007 12:32, Herbert Xu wrote:
First of all let's agree on some basic assumptions:
* A pair of spin lock/unlock subsumes the effect of a full mb.
Not unless you mean a pair of spin lock/unlock as in
2 spin lock/unlock pairs (4 operations).
*X = 10;
spin_lock(lock);
/* *Y
On Friday 19 October 2007 13:28, Herbert Xu wrote:
Nick Piggin [EMAIL PROTECTED] wrote:
First of all let's agree on some basic assumptions:
* A pair of spin lock/unlock subsumes the effect of a full mb.
Not unless you mean a pair of spin lock/unlock as in
2 spin lock/unlock pairs (4
On Wednesday 12 September 2007 20:01, Greg KH wrote:
On Wed, Sep 12, 2007 at 07:32:07AM +0200, Robert Schwebel wrote:
On Tue, Sep 11, 2007 at 11:43:17AM +0200, Heiko Schocher wrote:
I have developed a device driver and use sysfs to export some
registers to userspace.
Uuuh, ugly.
On Thu, Aug 30, 2007 at 02:42:41PM -0500, Brent Casavant wrote:
On Thu, 30 Aug 2007, Nick Piggin wrote:
I don't know whether this is exactly a correct implementation of
Linux's barrier semantics. On one hand, wmb _is_ ordering the stores
as they come out of the CPU; on the other, it isn't
On Thu, Aug 23, 2007 at 07:57:20PM +0200, Segher Boessenkool wrote:
The powerpc kernel needs to have full sync insns in every I/O
accessor in order to enforce all the ordering rules Linux demands.
It's a bloody shame, but the alternative would be to make the
barriers lots more expensive. A
On Thu, Aug 23, 2007 at 09:16:42AM -0700, Linus Torvalds wrote:
On Thu, 23 Aug 2007, Nick Piggin wrote:
Also, FWIW, there are some advantages of deferring the mmiowb thingy
until the point of unlock.
And that is exactly what ppc64 does.
But you're missing a big point: for 99.9
On Thu, Aug 23, 2007 at 09:56:16AM -0700, Jesse Barnes wrote:
On Thursday, August 23, 2007 12:27 am Benjamin Herrenschmidt wrote:
Of course, the normal memory barrier would usually be a
spin_unlock() or something like that, not a wmb(). In fact, I
don't think the powerpc implementation
On Thu, Aug 23, 2007 at 06:27:42PM +0200, Benjamin Herrenschmidt wrote:
On Thu, 2007-08-23 at 09:16 -0700, Linus Torvalds wrote:
On Thu, 23 Aug 2007, Nick Piggin wrote:
Also, FWIW, there are some advantages of deferring the mmiowb thingy
until the point of unlock
On Wed, Aug 22, 2007 at 12:02:11PM -0700, Jesse Barnes wrote:
On Wednesday, August 22, 2007 11:07 am Linus Torvalds wrote:
It really seems like it is some completely different concept from a
barrier. And it shows, on the platform where it really matters
(sn2), where the thing actually
On Wed, Aug 22, 2007 at 07:57:56PM -0700, Linus Torvalds wrote:
On Thu, 23 Aug 2007, Nick Piggin wrote:
Irix actually had an io_unlock() routine that did this
implicitly, but iirc that was shot down for Linux...
Why was it shot down? Seems like a pretty good idea to me
On Tue, Aug 21, 2007 at 09:43:17PM +0200, Segher Boessenkool wrote:
#define mb()   __asm__ __volatile__ ("sync" : : : "memory")
-#define rmb()  __asm__ __volatile__ (__stringify(LWSYNC) : : : "memory")
+#define rmb()  __asm__ __volatile__ ("sync" : : : "memory")
#define wmb()  __asm__ __volatile__
On Wed, Aug 22, 2007 at 05:29:50AM +0200, Segher Boessenkool wrote:
If this isn't causing any problems maybe there
is some logic we are overlooking?
The I/O accessor functions enforce the necessary ordering
already, I believe.
Ah, it looks like you might be right, IO should appear to go
On Wed, Aug 22, 2007 at 05:33:16AM +0200, Segher Boessenkool wrote:
The I/O accessor functions enforce the necessary ordering
already, I believe.
Hmm, I never followed those discussions last year about IO ordering,
and
I can't see where (if) it was documented anywhere :(
The comments in
Hi,
I'm ignorant when it comes to IO access, so I hope this isn't rubbish (if
it is, I would appreciate being corrected).
It took me more than a glance to see what the difference is supposed to be
between wmb() and mmiowb(). I think that's especially because mmiowb() isn't
really like a write barrier.
Sorry, this is patch 2/2 of course.
On Tue, Aug 21, 2007 at 04:16:52AM +0200, Nick Piggin wrote:
This one is perhaps not as straightforward. I'm pretty limited in the types
of powerpc machines I can test with, so I don't actually know whether this
is the right thing to do on power5/6 etc. I