Re: [PATCH 6/8] IB/hns: Replace counting semaphore event_sem with wait condition

2016-10-19 Thread Arnd Bergmann
On Tuesday, October 18, 2016 10:46:45 AM CEST Binoy Jayan wrote:
> Thank you for replying to the questions. I'll look for alternatives
> for patches 6, 7, and 8 and resend the series.

Ok, thanks!

I also looked at patch 8 some more and noticed that those four
functions all do the exact same sequence:

- initialize a mlx5_ib_umr_context on the stack
- assign "umrwr.wr.wr_cqe = _context.cqe"
- take the semaphore
- call ib_post_send with a single ib_send_wr
- wait for the mlx5_ib_umr_done() function to get called
- if we get back a failure, print a warning and return -EFAULT.
- release the semaphore

Moving all of these into a shared helper function would be
a good cleanup, and it leaves only a single function using
the semaphore, which can then be rewritten to use something
else.

The existing completion in there can be simplified to a
wait_event, since we are waiting for the return value to
be filled.
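
To illustrate, here is a rough, untested sketch of such a helper. The
name mlx5_ib_post_umr_sync is made up; the rest follows the names
already used in mr.c, and it keeps the semaphore and the completion
for now:

static int mlx5_ib_post_umr_sync(struct mlx5_ib_dev *dev,
                                 struct mlx5_umr_wr *umrwr)
{
        struct umr_common *umrc = &dev->umrc;
        struct mlx5_ib_umr_context umr_context;
        struct ib_send_wr *bad;
        int err;

        mlx5_ib_init_umr_context(&umr_context);
        umrwr->wr.wr_cqe = &umr_context.cqe;

        down(&umrc->sem);
        err = ib_post_send(umrc->qp, &umrwr->wr, &bad);
        if (err) {
                mlx5_ib_warn(dev, "UMR post send failed, err %d\n", err);
        } else {
                wait_for_completion(&umr_context.done);
                if (umr_context.status != IB_WC_SUCCESS) {
                        mlx5_ib_warn(dev, "UMR failed (%u)\n",
                                     umr_context.status);
                        err = -EFAULT;
                }
        }
        up(&umrc->sem);

        return err;
}

Once the four callers go through this, converting the completion to a
wait_event is a local change inside this one function.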

Arnd


Re: [PATCH 6/8] IB/hns: Replace counting semaphore event_sem with wait condition

2016-10-17 Thread Binoy Jayan
On 18 October 2016 at 01:59, Arnd Bergmann  wrote:
> On Monday, October 17, 2016 10:01:00 PM CEST Binoy Jayan wrote:
>> --- a/drivers/infiniband/hw/hns/hns_roce_cmd.c
>> +++ b/drivers/infiniband/hw/hns/hns_roce_cmd.c
>> @@ -248,10 +248,14 @@ static int hns_roce_cmd_mbox_wait(struct hns_roce_dev *hr_dev, u64 in_param,
>>  {
>>        int ret = 0;
>>
>> -       down(&hr_dev->cmd.event_sem);
>> +       wait_event(hr_dev->cmd.event_sem.wq,
>> +                  atomic_add_unless(&hr_dev->cmd.event_sem.count, -1, 0));
>> +
>>        ret = __hns_roce_cmd_mbox_wait(hr_dev, in_param, out_param,
>>                                       in_modifier, op_modifier, op, timeout);
>> -       up(&hr_dev->cmd.event_sem);
>> +
>> +       if (atomic_inc_return(&hr_dev->cmd.event_sem.count) == 1)
>> +               wake_up(&hr_dev->cmd.event_sem.wq);
>>
>>        return ret;
>>  }
>
> This is the only interesting use of the event_sem that cares about
> the counting, and it protects __hns_roce_cmd_mbox_wait() from being
> entered too often. The count here is the size of the
> hr_dev->cmd.context[] array.
>
> However, that function already uses a spinlock to protect that array
> and pick the correct context. I think changing the inner function
> to handle the case of 'no context available' by using a waitqueue
> without counting anything would be a reasonable transformation
> away from the semaphore.
>
> Arnd


Hi Arnd,

Thank you for replying to the questions. I'll look for alternatives
for patches 6, 7, and 8 and resend the series.

-Binoy


Re: [PATCH 6/8] IB/hns: Replace counting semaphore event_sem with wait condition

2016-10-17 Thread Arnd Bergmann
On Monday, October 17, 2016 10:01:00 PM CEST Binoy Jayan wrote:
> --- a/drivers/infiniband/hw/hns/hns_roce_cmd.c
> +++ b/drivers/infiniband/hw/hns/hns_roce_cmd.c
> @@ -248,10 +248,14 @@ static int hns_roce_cmd_mbox_wait(struct hns_roce_dev *hr_dev, u64 in_param,
>  {
>        int ret = 0;
>  
> -       down(&hr_dev->cmd.event_sem);
> +       wait_event(hr_dev->cmd.event_sem.wq,
> +                  atomic_add_unless(&hr_dev->cmd.event_sem.count, -1, 0));
> +
>        ret = __hns_roce_cmd_mbox_wait(hr_dev, in_param, out_param,
>                                       in_modifier, op_modifier, op, timeout);
> -       up(&hr_dev->cmd.event_sem);
> +
> +       if (atomic_inc_return(&hr_dev->cmd.event_sem.count) == 1)
> +               wake_up(&hr_dev->cmd.event_sem.wq);
>  
>        return ret;
>  }

This is the only interesting use of the event_sem that cares about
the counting, and it protects __hns_roce_cmd_mbox_wait() from being
entered too often. The count here is the size of the
hr_dev->cmd.context[] array.

However, that function already uses a spinlock to protect that array
and pick the correct context. I think changing the inner function
to handle the case of 'no context available' by using a waitqueue
without counting anything would be a reasonable transformation
away from the semaphore.
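
An untested sketch of that idea: add a wait queue (called 'wq' here,
made up) to struct hns_roce_cmdq, and let the context allocation in
the inner function sleep on it instead of counting:

static struct hns_roce_cmd_context *
hns_roce_try_get_context(struct hns_roce_cmdq *cmd)
{
        struct hns_roce_cmd_context *context = NULL;

        spin_lock(&cmd->context_lock);
        if (cmd->free_head >= 0) {
                context = &cmd->context[cmd->free_head];
                cmd->free_head = context->next;
        }
        spin_unlock(&cmd->context_lock);

        return context;
}

static struct hns_roce_cmd_context *
hns_roce_get_context(struct hns_roce_cmdq *cmd)
{
        struct hns_roce_cmd_context *context;

        /* wait_event() re-checks the condition, so no wakeup is lost */
        wait_event(cmd->wq,
                   (context = hns_roce_try_get_context(cmd)) != NULL);

        return context;
}

plus a matching wake_up(&cmd->wq) wherever a context is linked back
into the free list.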

Arnd


Re: [PATCH 6/8] IB/hns: Replace counting semaphore event_sem with wait condition

2016-10-17 Thread Christoph Hellwig
On Mon, Oct 17, 2016 at 10:01:00PM +0530, Binoy Jayan wrote:
> Counting semaphores are going away in the future, so replace the semaphore
> hns_roce_cmdq::event_sem with an open-coded implementation.

Sorry, but no.  Using a proper semaphore abstraction is always
better than open coding it.  While most semaphore users should move
to better primitives, those that actually are counting semaphores
should stick to the existing primitive.


[PATCH 6/8] IB/hns: Replace counting semaphore event_sem with wait condition

2016-10-17 Thread Binoy Jayan
Counting semaphores are going away in the future, so replace the semaphore
hns_roce_cmdq::event_sem with an open-coded implementation.
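
The open-coded pattern used below is:

    down(&sem)  ->  wait_event(wq, atomic_add_unless(&count, -1, 0))
    up(&sem)    ->  if (atomic_inc_return(&count) == 1) wake_up(&wq)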

Signed-off-by: Binoy Jayan 
---
 drivers/infiniband/hw/hns/hns_roce_cmd.c    | 16 ++++++++++++----
 drivers/infiniband/hw/hns/hns_roce_device.h |  3 ++-
 include/rdma/ib_sa.h                        |  5 +++++
 3 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/drivers/infiniband/hw/hns/hns_roce_cmd.c b/drivers/infiniband/hw/hns/hns_roce_cmd.c
index 1421fdb..3e76717 100644
--- a/drivers/infiniband/hw/hns/hns_roce_cmd.c
+++ b/drivers/infiniband/hw/hns/hns_roce_cmd.c
@@ -248,10 +248,14 @@ static int hns_roce_cmd_mbox_wait(struct hns_roce_dev *hr_dev, u64 in_param,
 {
        int ret = 0;
 
-       down(&hr_dev->cmd.event_sem);
+       wait_event(hr_dev->cmd.event_sem.wq,
+                  atomic_add_unless(&hr_dev->cmd.event_sem.count, -1, 0));
+
        ret = __hns_roce_cmd_mbox_wait(hr_dev, in_param, out_param,
                                       in_modifier, op_modifier, op, timeout);
-       up(&hr_dev->cmd.event_sem);
+
+       if (atomic_inc_return(&hr_dev->cmd.event_sem.count) == 1)
+               wake_up(&hr_dev->cmd.event_sem.wq);
 
        return ret;
 }
@@ -313,7 +317,9 @@ int hns_roce_cmd_use_events(struct hns_roce_dev *hr_dev)
        hr_cmd->context[hr_cmd->max_cmds - 1].next = -1;
        hr_cmd->free_head = 0;
 
-       sema_init(&hr_cmd->event_sem, hr_cmd->max_cmds);
+       init_waitqueue_head(&hr_cmd->event_sem.wq);
+       atomic_set(&hr_cmd->event_sem.count, hr_cmd->max_cmds);
+
        spin_lock_init(&hr_cmd->context_lock);
 
        hr_cmd->token_mask = CMD_TOKEN_MASK;
@@ -332,7 +338,9 @@ void hns_roce_cmd_use_polling(struct hns_roce_dev *hr_dev)
        hr_cmd->use_events = 0;
 
        for (i = 0; i < hr_cmd->max_cmds; ++i)
-               down(&hr_cmd->event_sem);
+               wait_event(hr_cmd->event_sem.wq,
+                          atomic_add_unless(
+                               &hr_dev->cmd.event_sem.count, -1, 0));
 
        kfree(hr_cmd->context);
        mutex_unlock(&hr_cmd->poll_mutex);
diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
index 2afe075..6aed04a 100644
--- a/drivers/infiniband/hw/hns/hns_roce_device.h
+++ b/drivers/infiniband/hw/hns/hns_roce_device.h
@@ -34,6 +34,7 @@
 #define _HNS_ROCE_DEVICE_H
 
 #include 
+#include 
 #include 
 
 #define DRV_NAME "hns_roce"
@@ -364,7 +365,7 @@ struct hns_roce_cmdq {
        * Event mode: cmd register mutex protection,
        * ensure to not exceed max_cmds and user use limit region
        */
-       struct semaphore        event_sem;
+       struct ib_semaphore     event_sem;
        int                     max_cmds;
        spinlock_t              context_lock;
        int                     free_head;
diff --git a/include/rdma/ib_sa.h b/include/rdma/ib_sa.h
index 5ee7aab..1901042 100644
--- a/include/rdma/ib_sa.h
+++ b/include/rdma/ib_sa.h
@@ -291,6 +291,11 @@ struct ib_sa_service_rec {
 #define IB_SA_GUIDINFO_REC_GID6        IB_SA_COMP_MASK(10)
 #define IB_SA_GUIDINFO_REC_GID7        IB_SA_COMP_MASK(11)
 
+struct ib_semaphore {
+   wait_queue_head_t wq;
+   atomic_t count;
+};
+
 struct ib_sa_guidinfo_rec {
__be16  lid;
u8  block_num;
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


