On Wed, 24 Oct 2012, Paul E. McKenney wrote:
> On Wed, Oct 24, 2012 at 04:44:14PM -0400, Mikulas Patocka wrote:
> >
> > On Wed, 24 Oct 2012, Paul E. McKenney wrote:
> >
> > > On Wed, Oct 24, 2012 at 04:22:17PM -0400, Mikulas Patocka wrote:
On 10/23, Peter Zijlstra wrote:
>
> On Tue, 2012-10-23 at 21:23 +0200, Oleg Nesterov wrote:
> >
> > static void mb_ipi(void *arg)
> > {
> > 	smp_mb(); /* unneeded ? */
> > }
> >
> > static void force_mb_on_each_cpu(void)
> > {
> > 	smp_mb();
> > 	smp_call_function(mb_ipi, NULL, 1);
> > }
On Tue, 23 Oct 2012, Oleg Nesterov wrote:
> On 10/23, Oleg Nesterov wrote:
> >
> > Not really the comment, but the question...
>
> Damn. And another question.
>
> Mikulas, I am sorry for this (almost) off-topic noise. Let me repeat
> just in case that I am not arguing with your patches.
>
> So write_lock/write_unlock needs to call synchronize_sched() 3 times...
On Tue, 2012-10-23 at 21:23 +0200, Oleg Nesterov wrote:
> I have to admit, I have
> no idea how much cli/sti is slower compared to preempt_disable/enable.

A lot.. esp on stupid hardware (insert pentium-4 reference), but I think
it's more expensive for pretty much all hardware; preempt_disable() is
just a preempt-count increment.
On Tue, Oct 23, 2012 at 08:41:23PM +0200, Oleg Nesterov wrote:
> On 10/23, Paul E. McKenney wrote:
> >
> > * Note that this guarantee implies a further memory-ordering guarantee.
> > * On systems with more than one CPU, when synchronize_sched() returns,
> > * each CPU is guaranteed to have executed a full memory barrier since
> > * the end of its last RCU read-side critical section.
On Mon, 2012-10-22 at 19:37 -0400, Mikulas Patocka wrote:
> -	/*
> -	 * On X86, write operation in this_cpu_dec serves as a memory unlock
> -	 * barrier (i.e. memory accesses may be moved before the write, but
> -	 * no memory accesses are moved past the write).
On 10/23, Paul E. McKenney wrote:
>
> On Tue, Oct 23, 2012 at 06:59:12PM +0200, Oleg Nesterov wrote:
> > Not really the comment, but the question...
> >
> > On 10/22, Mikulas Patocka wrote:
> > >
> > > static inline void percpu_down_read(struct percpu_rw_semaphore *p)
> > > {
> > > 	rcu_read_lock();
> > > @@ -24,22 +27,12 @@ static inline void percpu_down_read(stru
> > > 	}
> > > 	this_cpu_inc(*p->counters);
This patch introduces new barrier pair light_mb() and heavy_mb() for
percpu rw semaphores.

This patch fixes a bug in percpu-rw-semaphores where a barrier was
missing in percpu_up_write.

This patch improves performance on the read path of
percpu-rw-semaphores: on non-x86 cpus, there was a
42 matches