Re: [dm-devel] [PATCH 2/3] Introduce percpu rw semaphores

2012-08-01 Thread Mikulas Patocka


On Sun, 29 Jul 2012, Eric Dumazet wrote:

> On Sun, 2012-07-29 at 12:10 +0200, Eric Dumazet wrote:
> 
> > You can probably design something needing no more than 4 bytes per cpu,
> > and this thing could use non locked operations as bonus.
> > 
> > like the following ...
> 
> Coming back from my bike ride, here is a more polished version with
> proper synchronization/barriers.
> 
> struct percpu_rw_semaphore {
>   /* percpu_sem_down_read() use the following in fast path */
>   unsigned int __percpu *active_counters;
> 
>   unsigned int __percpu *counters;
>   struct rw_semaphore sem; /* used in slow path and by writers */
> };
> 
> static inline int percpu_sem_init(struct percpu_rw_semaphore *p)
> {
>   p->counters = alloc_percpu(unsigned int);
>   if (!p->counters)
>   return -ENOMEM;
>   init_rwsem(&p->sem);
>   rcu_assign_pointer(p->active_counters, p->counters);
>   return 0;
> }
> 
> 
> static inline bool percpu_sem_down_read(struct percpu_rw_semaphore *p)
> {
>   unsigned int __percpu *counters;
> 
>   rcu_read_lock();
>   counters = rcu_dereference(p->active_counters);
>   if (counters) {
>   this_cpu_inc(*counters);
>   smp_wmb(); /* paired with smp_rmb() in percpu_count() */

Why is this barrier needed? RCU works as a barrier, doesn't it?
RCU is unlocked when the CPU passes a quiescent state, and I suppose that
entering the quiescent state works as a barrier. Or doesn't it?

>   rcu_read_unlock();
>   return true;
>   }
>   rcu_read_unlock();
>   down_read(&p->sem);
>   return false;
> }
> 
> static inline void percpu_sem_up_read(struct percpu_rw_semaphore *p, bool fastpath)
> {
>   if (fastpath)
>   this_cpu_dec(*p->counters);
>   else
>   up_read(&p->sem);
> }
> 
> static inline unsigned int percpu_count(unsigned int __percpu *counters)
> {
>   unsigned int total = 0;
>   int cpu;
> 
>   for_each_possible_cpu(cpu)
>   total += *per_cpu_ptr(counters, cpu);
> 
>   return total;
> }
> 
> static inline void percpu_sem_down_write(struct percpu_rw_semaphore *p)
> {
>   down_write(&p->sem);
>   p->active_counters = NULL;
>   synchronize_rcu();
>   smp_rmb(); /* paired with smp_wmb() in percpu_sem_down_read() */

Why is a barrier needed here? Doesn't synchronize_rcu() work as a barrier?

Mikulas

>   while (percpu_count(p->counters))
>   schedule();
> }
> 
> static inline void percpu_sem_up_write(struct percpu_rw_semaphore *p)
> {
>   rcu_assign_pointer(p->active_counters, p->counters);
>   up_write(&p->sem);
> }
> 
> 


Re: [dm-devel] [PATCH 2/3] Introduce percpu rw semaphores

2012-08-01 Thread Paul E. McKenney
On Mon, Jul 30, 2012 at 08:00:19PM -0400, Mikulas Patocka wrote:
> 
> 
> On Mon, 30 Jul 2012, Paul E. McKenney wrote:
> 
> > On Sun, Jul 29, 2012 at 01:13:34AM -0400, Mikulas Patocka wrote:
> > > On Sat, 28 Jul 2012, Eric Dumazet wrote:
> > > > On Sat, 2012-07-28 at 12:41 -0400, Mikulas Patocka wrote:
> > 
> > [ . . . ]
> > 
> > > > (bdev->bd_block_size should be read exactly once )
> > > 
> > > Rewrite all direct and non-direct io code so that it reads block size 
> > > just 
> > > once ...
> > 
> > For whatever it is worth, the 3.5 Linux kernel only has about ten mentions
> > of bd_block_size, at least according to cscope.
> 
> plus 213 uses of i_blkbits (which is derived directly from bd_block_size). 
> 45 of them is in fs/*.c and mm/*.c

At least it is only hundreds rather than thousands!  ;-)

Thanx, Paul



Re: [dm-devel] [PATCH 2/3] Introduce percpu rw semaphores

2012-07-30 Thread Mikulas Patocka


On Mon, 30 Jul 2012, Paul E. McKenney wrote:

> On Sun, Jul 29, 2012 at 01:13:34AM -0400, Mikulas Patocka wrote:
> > On Sat, 28 Jul 2012, Eric Dumazet wrote:
> > > On Sat, 2012-07-28 at 12:41 -0400, Mikulas Patocka wrote:
> 
> [ . . . ]
> 
> > > (bdev->bd_block_size should be read exactly once )
> > 
> > Rewrite all direct and non-direct io code so that it reads block size just 
> > once ...
> 
> For whatever it is worth, the 3.5 Linux kernel only has about ten mentions
> of bd_block_size, at least according to cscope.
> 
>   Thanx, Paul

plus 213 uses of i_blkbits (which is derived directly from bd_block_size). 
45 of them are in fs/*.c and mm/*.c

Mikulas


Re: [dm-devel] [PATCH 2/3] Introduce percpu rw semaphores

2012-07-30 Thread Paul E. McKenney
On Sun, Jul 29, 2012 at 01:13:34AM -0400, Mikulas Patocka wrote:
> On Sat, 28 Jul 2012, Eric Dumazet wrote:
> > On Sat, 2012-07-28 at 12:41 -0400, Mikulas Patocka wrote:

[ . . . ]

> > (bdev->bd_block_size should be read exactly once )
> 
> Rewrite all direct and non-direct io code so that it reads block size just 
> once ...

For whatever it is worth, the 3.5 Linux kernel only has about ten mentions
of bd_block_size, at least according to cscope.

Thanx, Paul



Re: [dm-devel] [PATCH 2/3] Introduce percpu rw semaphores

2012-07-29 Thread Eric Dumazet
On Sun, 2012-07-29 at 12:10 +0200, Eric Dumazet wrote:

> You can probably design something needing no more than 4 bytes per cpu,
> and this thing could use non locked operations as bonus.
> 
> like the following ...

Coming back from my bike ride, here is a more polished version with
proper synchronization/barriers.

struct percpu_rw_semaphore {
/* percpu_sem_down_read() use the following in fast path */
unsigned int __percpu *active_counters;

unsigned int __percpu *counters;
struct rw_semaphore sem; /* used in slow path and by writers */
};

static inline int percpu_sem_init(struct percpu_rw_semaphore *p)
{
p->counters = alloc_percpu(unsigned int);
if (!p->counters)
return -ENOMEM;
init_rwsem(&p->sem);
rcu_assign_pointer(p->active_counters, p->counters);
return 0;
}


static inline bool percpu_sem_down_read(struct percpu_rw_semaphore *p)
{
unsigned int __percpu *counters;

rcu_read_lock();
counters = rcu_dereference(p->active_counters);
if (counters) {
this_cpu_inc(*counters);
smp_wmb(); /* paired with smp_rmb() in percpu_count() */
rcu_read_unlock();
return true;
}
rcu_read_unlock();
down_read(&p->sem);
return false;
}

static inline void percpu_sem_up_read(struct percpu_rw_semaphore *p, bool fastpath)
{
if (fastpath)
this_cpu_dec(*p->counters);
else
up_read(&p->sem);
}

static inline unsigned int percpu_count(unsigned int __percpu *counters)
{
unsigned int total = 0;
int cpu;

for_each_possible_cpu(cpu)
total += *per_cpu_ptr(counters, cpu);

return total;
}

static inline void percpu_sem_down_write(struct percpu_rw_semaphore *p)
{
down_write(&p->sem);
p->active_counters = NULL;
synchronize_rcu();
smp_rmb(); /* paired with smp_wmb() in percpu_sem_down_read() */

while (percpu_count(p->counters))
schedule();
}

static inline void percpu_sem_up_write(struct percpu_rw_semaphore *p)
{
rcu_assign_pointer(p->active_counters, p->counters);
up_write(&p->sem);
}
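
For readers following the thread, here is a minimal usage sketch of the interface above. It is illustrative only: the protected variable and the function names are made up, and percpu_sem_init() is assumed to have succeeded. The read side has to remember whether it took the fast path, so that percpu_sem_up_read() releases the side that was actually taken.

static struct percpu_rw_semaphore block_size_sem;
static unsigned int block_size = 512;

static unsigned int read_block_size(void)
{
        /* returns true when the per-cpu counter fast path was used */
        bool fast = percpu_sem_down_read(&block_size_sem);
        unsigned int bs = block_size;   /* stable while the read side is held */

        percpu_sem_up_read(&block_size_sem, fast);
        return bs;
}

static void set_block_size(unsigned int bs)
{
        /* waits, via synchronize_rcu() and percpu_count(), for all fast-path readers */
        percpu_sem_down_write(&block_size_sem);
        block_size = bs;
        percpu_sem_up_write(&block_size_sem);
}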




Re: [dm-devel] [PATCH 2/3] Introduce percpu rw semaphores

2012-07-29 Thread Eric Dumazet
On Sun, 2012-07-29 at 01:13 -0400, Mikulas Patocka wrote:

> Each cpu should have its own rw semaphore in its cache, so I don't see a 
> problem there.
> 
> When you change block size, all 4096 rw semaphores are locked for write, 
> but changing block size is not a performance sensitive operation.
> 
> > Really you shouldnt use rwlock in a path if this might hurt performance.
> > 
> > RCU is probably a better answer.
> 
> RCU is meaningless here. RCU allows lockless dereference of a pointer. 
> Here the problem is not pointer dereference, the problem is that integer 
> bd_block_size may change.

So add a pointer if you need to. That's the point.

> 
> > (bdev->bd_block_size should be read exactly once )
> 
> Rewrite all direct and non-direct io code so that it reads block size just 
> once ...


You introduced percpu rw semaphores; that's only an incentive for people to
use that infrastructure elsewhere.

And it's a big hammer:

sizeof(struct rw_semaphore)=0x70 

You can probably design something needing no more than 4 bytes per cpu,
and this thing could use non locked operations as bonus.

like the following ...

struct percpu_rw_semaphore {
/* percpu_sem_down_read() use the following in fast path */
unsigned int __percpu *active_counters;

unsigned int __percpu *counters;
struct rw_semaphore sem; /* used in slow path and by writers */
};

static inline int percpu_sem_init(struct percpu_rw_semaphore *p)
{
p->counters = alloc_percpu(unsigned int);
if (!p->counters)
return -ENOMEM;
init_rwsem(&p->sem);
p->active_counters = p->counters;
return 0;
}


static inline bool percpu_sem_down_read(struct percpu_rw_semaphore *p)
{
unsigned int __percpu *counters = ACCESS_ONCE(p->active_counters);

if (counters) {
this_cpu_inc(*counters);
return true;
}
down_read(&p->sem);
return false;
}

static inline void percpu_sem_up_read(struct percpu_rw_semaphore *p, bool fastpath)
{
if (fastpath)
this_cpu_dec(*p->counters);
else
up_read(&p->sem);
}

static inline unsigned int percpu_count(unsigned int *counters)
{
unsigned int total = 0;
int cpu;

for_each_possible_cpu(cpu)
total += *per_cpu_ptr(counters, cpu);

return total;
}

static inline void percpu_sem_down_write(struct percpu_rw_semaphore *p)
{
down_write(&p->sem);
p->active_counters = NULL;

while (percpu_count(p->counters))
schedule();
}

static inline void percpu_sem_up_write(struct percpu_rw_semaphore *p)
{
p->active_counters = p->counters;
up_write(&p->sem);
}






Re: [dm-devel] [PATCH 2/3] Introduce percpu rw semaphores

2012-07-28 Thread Mikulas Patocka


On Sat, 28 Jul 2012, Eric Dumazet wrote:

> On Sat, 2012-07-28 at 12:41 -0400, Mikulas Patocka wrote:
> > Introduce percpu rw semaphores
> > 
> > When many CPUs are locking a rw semaphore for read concurrently, cache
> > line bouncing occurs. When a CPU acquires rw semaphore for read, the
> > CPU writes to the cache line holding the semaphore. Consequently, the
> > cache line is being moved between CPUs and this slows down semaphore
> > acquisition.
> > 
> > This patch introduces new percpu rw semaphores. They are functionally
> > identical to existing rw semaphores, but locking the percpu rw semaphore
> > for read is faster and locking for write is slower.
> > 
> > The percpu rw semaphore is implemented as a percpu array of rw
> > semaphores, each semaphore for one CPU. When some thread needs to lock
> > the semaphore for read, only semaphore on the current CPU is locked for
> > read. When some thread needs to lock the semaphore for write, semaphores
> > for all CPUs are locked for write. This avoids cache line bouncing.
> > 
> > Note that the thread that is locking percpu rw semaphore may be
> > rescheduled, it doesn't cause bug, but cache line bouncing occurs in
> > this case.
> > 
> > Signed-off-by: Mikulas Patocka 
> 
> I am curious to see how this performs with 4096 cpus ?

Each cpu should have its own rw semaphore in its cache, so I don't see a 
problem there.

When you change block size, all 4096 rw semaphores are locked for write, 
but changing block size is not a performance sensitive operation.

> Really you shouldnt use rwlock in a path if this might hurt performance.
> 
> RCU is probably a better answer.

RCU is meaningless here. RCU allows lockless dereference of a pointer. 
Here the problem is not pointer dereference; the problem is that the integer 
bd_block_size may change.

> (bdev->bd_block_size should be read exactly once )

Rewrite all direct and non-direct io code so that it reads block size just 
once ...

Mikulas
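
To make this concrete, here is a hypothetical sketch (not code from the series) of how an I/O path could bracket its use of bd_block_size with the percpu rw semaphore from patch 2/3 (the interface appears in full further down this thread), while the rare block size change takes it for write. The semaphore parameter and the function names are made up:

static unsigned int stable_block_size(struct block_device *bdev,
                                      struct percpu_rw_semaphore *bd_size_sem)
{
        /* fast: only this CPU's rw semaphore is taken for read */
        percpu_rwsem_ptr s = percpu_down_read(bd_size_sem);
        unsigned int bsize = bdev->bd_block_size;       /* cannot change while held */

        percpu_up_read(bd_size_sem, s);
        return bsize;
}

static void change_block_size(struct block_device *bdev,
                              struct percpu_rw_semaphore *bd_size_sem,
                              unsigned int bsize)
{
        /* slow and rare: every CPU's rw semaphore is taken for write */
        percpu_down_write(bd_size_sem);
        bdev->bd_block_size = bsize;
        percpu_up_write(bd_size_sem);
}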


Re: [PATCH 2/3] Introduce percpu rw semaphores

2012-07-28 Thread Eric Dumazet
On Sat, 2012-07-28 at 12:41 -0400, Mikulas Patocka wrote:
> Introduce percpu rw semaphores
> 
> When many CPUs are locking a rw semaphore for read concurrently, cache
> line bouncing occurs. When a CPU acquires rw semaphore for read, the
> CPU writes to the cache line holding the semaphore. Consequently, the
> cache line is being moved between CPUs and this slows down semaphore
> acquisition.
> 
> This patch introduces new percpu rw semaphores. They are functionally
> identical to existing rw semaphores, but locking the percpu rw semaphore
> for read is faster and locking for write is slower.
> 
> The percpu rw semaphore is implemented as a percpu array of rw
> semaphores, each semaphore for one CPU. When some thread needs to lock
> the semaphore for read, only semaphore on the current CPU is locked for
> read. When some thread needs to lock the semaphore for write, semaphores
> for all CPUs are locked for write. This avoids cache line bouncing.
> 
> Note that the thread that is locking percpu rw semaphore may be
> rescheduled, it doesn't cause bug, but cache line bouncing occurs in
> this case.
> 
> Signed-off-by: Mikulas Patocka 

I am curious to see how this performs with 4096 cpus?

Really you shouldn't use an rwlock in a path if this might hurt performance.

RCU is probably a better answer.

(bdev->bd_block_size should be read exactly once)





[PATCH 2/3] Introduce percpu rw semaphores

2012-07-28 Thread Mikulas Patocka
Introduce percpu rw semaphores

When many CPUs are locking a rw semaphore for read concurrently, cache
line bouncing occurs. When a CPU acquires the rw semaphore for read, the
CPU writes to the cache line holding the semaphore. Consequently, the
cache line is being moved between CPUs and this slows down semaphore
acquisition.

This patch introduces new percpu rw semaphores. They are functionally
identical to existing rw semaphores, but locking the percpu rw semaphore
for read is faster and locking for write is slower.

The percpu rw semaphore is implemented as a percpu array of rw
semaphores, one semaphore per CPU. When a thread needs to lock the
semaphore for read, only the semaphore on the current CPU is locked for
read. When a thread needs to lock the semaphore for write, the
semaphores for all CPUs are locked for write. This avoids cache line
bouncing.

Note that a thread that is locking the percpu rw semaphore may be
rescheduled; this doesn't cause a bug, but cache line bouncing occurs in
that case.

Signed-off-by: Mikulas Patocka 

---
 include/linux/percpu-rwsem.h |   77 +++
 1 file changed, 77 insertions(+)

Index: linux-3.5-fast/include/linux/percpu-rwsem.h
===
--- /dev/null   1970-01-01 00:00:00.0 +
+++ linux-3.5-fast/include/linux/percpu-rwsem.h 2012-07-28 18:41:05.0 +0200
@@ -0,0 +1,77 @@
+#ifndef _LINUX_PERCPU_RWSEM_H
+#define _LINUX_PERCPU_RWSEM_H
+
+#include <linux/rwsem.h>
+#include <linux/percpu.h>
+
+#ifndef CONFIG_SMP
+
+#define percpu_rw_semaphore    rw_semaphore
+#define percpu_rwsem_ptr       int
+#define percpu_down_read(x)    (down_read(x), 0)
+#define percpu_up_read(x, y)   up_read(x)
+#define percpu_down_write      down_write
+#define percpu_up_write        up_write
+#define percpu_init_rwsem(x)   (({init_rwsem(x);}), 0)
+#define percpu_free_rwsem(x)   do { } while (0)
+
+#else
+
+struct percpu_rw_semaphore {
+   struct rw_semaphore __percpu *s;
+};
+
+typedef struct rw_semaphore *percpu_rwsem_ptr;
+
+static inline percpu_rwsem_ptr percpu_down_read(struct percpu_rw_semaphore *sem)
+{
+   struct rw_semaphore *s = __this_cpu_ptr(sem->s);
+   down_read(s);
+   return s;
+}
+
+static inline void percpu_up_read(struct percpu_rw_semaphore *sem, percpu_rwsem_ptr s)
+{
+   up_read(s);
+}
+
+static inline void percpu_down_write(struct percpu_rw_semaphore *sem)
+{
+   int cpu;
+   for_each_possible_cpu(cpu) {
+   struct rw_semaphore *s = per_cpu_ptr(sem->s, cpu);
+   down_write(s);
+   }
+}
+
+static inline void percpu_up_write(struct percpu_rw_semaphore *sem)
+{
+   int cpu;
+   for_each_possible_cpu(cpu) {
+   struct rw_semaphore *s = per_cpu_ptr(sem->s, cpu);
+   up_write(s);
+   }
+}
+
+static inline int percpu_init_rwsem(struct percpu_rw_semaphore *sem)
+{
+   int cpu;
+   sem->s = alloc_percpu(struct rw_semaphore);
+   if (unlikely(!sem->s))
+   return -ENOMEM;
+   for_each_possible_cpu(cpu) {
+   struct rw_semaphore *s = per_cpu_ptr(sem->s, cpu);
+   init_rwsem(s);
+   }
+   return 0;
+}
+
+static inline void percpu_free_rwsem(struct percpu_rw_semaphore *sem)
+{
+   free_percpu(sem->s);
+   sem->s = NULL;  /* catch use after free bugs */
+}
+
+#endif
+
+#endif
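
A minimal usage sketch of the interface above follows (illustrative only; the example_* names are made up). The detail worth noting is that percpu_down_read() returns the rw_semaphore it actually locked, and the caller must pass that same value back to percpu_up_read(), because the thread may have been migrated to another CPU in the meantime:

static struct percpu_rw_semaphore example_lock;
static unsigned int example_value;

static int example_setup(void)
{
        return percpu_init_rwsem(&example_lock);        /* -ENOMEM if alloc_percpu() fails */
}

static unsigned int example_read(void)
{
        percpu_rwsem_ptr s = percpu_down_read(&example_lock);  /* locks only this CPU's rwsem */
        unsigned int v = example_value;

        percpu_up_read(&example_lock, s);       /* releases the rwsem that was taken */
        return v;
}

static void example_write(unsigned int v)
{
        percpu_down_write(&example_lock);       /* locks every CPU's rwsem for write */
        example_value = v;
        percpu_up_write(&example_lock);
}

static void example_teardown(void)
{
        percpu_free_rwsem(&example_lock);
}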

