On Wed, Mar 05, 2014 at 10:34:32PM +0100, Stefan Richter wrote:
> On Feb 21 Stefan Richter wrote:
> > On Feb 20 Tejun Heo wrote:
> > > PREPARE_[DELAYED_]WORK() are being phased out. They have few users
> > > and a nasty surprise in terms of reentrancy guarantee as workqueue
> > > considers work items to be different if they don't have the same work
> > > function.
> > >
> > > firewire core-device and sbp2 have been multiplexing work items
> > > with multiple
On Mon, Feb 24, 2014 at 01:32:54AM +0100, Stefan Richter wrote:
> On Feb 23 Paul E. McKenney wrote:
> > Please see below for a patch against the current version of
> > Documentation/memory-barriers.txt. Does this update help?
>
> Thank you, this clarifies it.
>
> [...]
> A new nit:
> > +The operations will always occur in one of the following orders:
> >
> > - STORE *A,
On Sun, Feb 23, 2014 at 07:09:55PM -0500, Peter Hurley wrote:
> On 02/23/2014 06:50 PM, Paul E. McKenney wrote:
> > On Sun, Feb 23, 2014 at 03:35:31PM -0500, Peter Hurley wrote:
> > > Hi Paul,
> > >
> > > On 02/23/2014 11:37 AM, Paul E. McKenney wrote:
> > > > commit aba6b0e82c9de53eb032844f1932599f148ff68d
> > > > Author: Paul E. McKenney
> > > > Date:   Sun Feb 23 08:34:24 2014 -0800
> > > >
> > > >     Documentation/memory-barriers.txt: Clarify release/acquire ordering
> > > >
> > > >     This commit fixes a couple of typos and
Hi James,

On 02/23/2014 03:05 PM, James Bottomley wrote:
> On Sat, 2014-02-22 at 14:03 -0500, Peter Hurley wrote:
> > If it is necessary for a RELEASE-ACQUIRE pair to produce a full barrier, the
> > ACQUIRE can be followed by an smp_mb__after_unlock_lock() invocation. This
> > will produce a full barrier if either (a) the RELEASE and the ACQUIRE are
> > executed by
On Sun, Feb 23, 2014 at 02:23:03AM +0100, Stefan Richter wrote:
> Hi Paul,
>
> in patch "Documentation/memory-barriers.txt: Downgrade UNLOCK+BLOCK" (sic),
> you wrote:
> > + Memory operations issued before the LOCK may be completed after the
> > + LOCK operation has completed. An smp_mb__before_spinlock(), combined
> > + with a following LOCK, orders prior loads
On 02/22/2014 01:52 PM, James Bottomley wrote:
> On Sat, 2014-02-22 at 13:48 -0500, Peter Hurley wrote:
> > On 02/22/2014 01:43 PM, James Bottomley wrote:
> > > On Fri, 2014-02-21 at 18:01 -0500, Peter Hurley wrote:
> > > > On 02/21/2014 11:57 AM, Tejun Heo wrote:
> > > > > Yo,
> > > > >
> > > > > On Fri, Feb 21, 2014 at 11:53:46AM -0500, Peter Hurley wrote:
> > > > > > Ok, I can do that. But AFAIK it'll have to be an smp_rmb(); there is
> > > > > > no mb__after unlock.
> > > > >
> > > > > We do have smp_mb__after_unlock_lock().
On 02/22/2014 09:38 AM, Tejun Heo wrote:
> Hey,
>
> On Fri, Feb 21, 2014 at 06:46:24PM -0500, Peter Hurley wrote:
> > It's a long story but the short version is that
> > Documentation/memory-barriers.txt recently was overhauled to reflect
> > what cpus actually do and what the different archs actually
> > deliver.
> >
> > Turns out that unlock + lock is not
On 02/21/2014 06:18 PM, Tejun Heo wrote:
> On Fri, Feb 21, 2014 at 06:01:29PM -0500, Peter Hurley wrote:
> > smp_mb__after_unlock_lock() is only for ordering memory operations
> > between two spin-locked sections on either the same lock or by
> > the same task/cpu. Like:
> >
> >     i = 1
> >     spin_unlock(lock1)
> >     spin_lock(lock2)
On 02/21/2014 11:57 AM, Tejun Heo wrote:
> Yo,
>
> On Fri, Feb 21, 2014 at 11:53:46AM -0500, Peter Hurley wrote:
> > Ok, I can do that. But AFAIK it'll have to be an smp_rmb(); there is
> > no mb__after unlock.
>
> We do have smp_mb__after_unlock_lock().

[ After thinking about it some, I don't think preventing speculative
writes before clearing
On Feb 20 Tejun Heo wrote:
> PREPARE_[DELAYED_]WORK() are being phased out. They have few users
> and a nasty surprise in terms of reentrancy guarantee as workqueue
> considers work items to be different if they don't have the same work
> function.
>
> firewire core-device and sbp2 have been multiplexing work items
> with multiple
Hi Tejun,

On 02/21/2014 08:06 AM, Tejun Heo wrote:
> Hello,
>
> On Fri, Feb 21, 2014 at 07:51:48AM -0500, Peter Hurley wrote:
> > I think the vast majority of kernel code which uses the workqueue
> > assumes there is a memory ordering guarantee.
>
> Not really. Workqueues haven't even guaranteed non-reentrancy until
> recently, forcing everybody to lock
On 02/21/2014 05:03 AM, Tejun Heo wrote:
> On Fri, Feb 21, 2014 at 12:13:16AM -0500, Peter Hurley wrote:
> > CPU 0                          | CPU 1
> >                                |
> > INIT_WORK(fw_device_workfn)    |
> >                                |
> > workfn = funcA                 |
> > queue_work_on()                |
> > .
On 02/20/2014 09:13 PM, Tejun Heo wrote:
> On Thu, Feb 20, 2014 at 09:07:27PM -0500, Peter Hurley wrote:
> > On 02/20/2014 08:59 PM, Tejun Heo wrote:
> > > Hello,
> > >
> > > On Thu, Feb 20, 2014 at 08:44:46PM -0500, Peter Hurley wrote:
> > > > > +static void fw_device_workfn(struct work_struct *work)
> > > > > +{
> > > > > +	struct fw_device *device = container_of(to_delayed_work(work),
> > > > > +						struct fw_device, work);
> > > >
> > > > I think this needs an