Re: [PATCH net-next 6/8] net: thunderx: Rework interrupt handler

2015-08-30 Thread Sunil Kovvuri
On Sun, Aug 30, 2015 at 2:50 PM, Aleksey Makarov  wrote:
> On 29.08.2015 04:44, Alexey Klimov wrote:
>
>>> -static irqreturn_t nicvf_intr_handler(int irq, void *nicvf_irq)
>>> +static irqreturn_t nicvf_intr_handler(int irq, void *cq_irq)
>>> +{
>>> +   struct nicvf_cq_poll *cq_poll = (struct nicvf_cq_poll *)cq_irq;
>>> +   struct nicvf *nic = cq_poll->nicvf;
>>> +   int qidx = cq_poll->cq_idx;
>>> +
>>> +   nicvf_dump_intr_status(nic);
>>> +
>>> +   /* Disable interrupts */
>>> +   nicvf_disable_intr(nic, NICVF_INTR_CQ, qidx);
>>> +
>>> +   /* Schedule NAPI */
>>> +   napi_schedule(&cq_poll->napi);
>>> +
>>> +   /* Clear interrupt */
>>> +   nicvf_clear_intr(nic, NICVF_INTR_CQ, qidx);
>>> +
>>> +   return IRQ_HANDLED;
>>> +}
>>
>>
>> You're not considering spurious irqs in the new irq handlers here and
>> below, and you schedule napi/tasklets unconditionally. Is that correct?
>> It looks to me like the previous implementation relied on reading
>> NIC_VF_INT to understand the irq type and what actions should be
>> performed. It generally allowed for the possibility that no interrupt had occurred.
>
>
> 1. The previous version of the handler did not handle spurious interrupts
> either.  Probably that means that the author of the patch knows for sure
> that they do not happen.

Yes, no spurious interrupts are expected from the hardware.
Even if one does occur, the NAPI poll routine will handle it, as the
valid descriptor count will be zero.
I don't think it makes sense to check for a spurious interrupt on every interrupt.

>
> 2. Instead of reading the status register, the new version registers different
> handlers for different irqs.  I don't see why that would be wrong.

The previous implementation resulted in scheduling multiple NAPI poll handlers
on the same CPU even if the IRQs' affinities were set to different CPUs.
Hence they are separated now.

>
> I am going to address your other suggestions in the next version of the
> patchset.
>
> Thank you
> Aleksey Makarov
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH net-next 6/8] net: thunderx: Rework interrupt handler

2015-08-30 Thread Aleksey Makarov

On 29.08.2015 04:44, Alexey Klimov wrote:


-static irqreturn_t nicvf_intr_handler(int irq, void *nicvf_irq)
+static irqreturn_t nicvf_intr_handler(int irq, void *cq_irq)
+{
+   struct nicvf_cq_poll *cq_poll = (struct nicvf_cq_poll *)cq_irq;
+   struct nicvf *nic = cq_poll->nicvf;
+   int qidx = cq_poll->cq_idx;
+
+   nicvf_dump_intr_status(nic);
+
+   /* Disable interrupts */
+   nicvf_disable_intr(nic, NICVF_INTR_CQ, qidx);
+
+   /* Schedule NAPI */
+   napi_schedule(&cq_poll->napi);
+
+   /* Clear interrupt */
+   nicvf_clear_intr(nic, NICVF_INTR_CQ, qidx);
+
+   return IRQ_HANDLED;
+}


You're not considering spurious irqs in the new irq handlers here and
below, and you schedule napi/tasklets unconditionally. Is that correct?
It looks to me like the previous implementation relied on reading
NIC_VF_INT to understand the irq type and what actions should be
performed. It generally allowed for the possibility that no interrupt had occurred.


1. The previous version of the handler did not handle spurious 
interrupts either.  Probably that means that the author of the patch 
knows for sure that they do not happen.


2. Instead of reading the status register, the new version registers
different handlers for different irqs.  I don't see why that would be wrong.


I am going to address your other suggestions in the next version of the 
patchset.


Thank you
Aleksey Makarov




Re: [PATCH net-next 6/8] net: thunderx: Rework interrupt handler

2015-08-28 Thread Alexey Klimov
Hi Aleksey,

let me add a few minor points below.

On Fri, Aug 28, 2015 at 5:59 PM, Aleksey Makarov
 wrote:
> From: Sunil Goutham 
>
> Rework interrupt handler to avoid checking IRQ affinity of
> CQ interrupts. Now separate handlers are registered for each IRQ
> including RBDR. Also register interrupt handlers for only those
> which are being used.

Also add nicvf_dump_intr_status() and use it in the irq handler(s).
I suggest checking and extending the commit message and thinking about the
commit name. Maybe "net: thunderx: rework interrupt handling and
registration" at least?

Please consider the possibility of splitting this patch into a few patches too.

>
> Signed-off-by: Sunil Goutham 
> Signed-off-by: Aleksey Makarov 
> ---
>  drivers/net/ethernet/cavium/thunder/nic.h  |   1 +
>  drivers/net/ethernet/cavium/thunder/nicvf_main.c   | 172 
> -
>  drivers/net/ethernet/cavium/thunder/nicvf_queues.h |   2 +
>  3 files changed, 103 insertions(+), 72 deletions(-)
>
> diff --git a/drivers/net/ethernet/cavium/thunder/nic.h 
> b/drivers/net/ethernet/cavium/thunder/nic.h
> index a83f567..89b997e 100644
> --- a/drivers/net/ethernet/cavium/thunder/nic.h
> +++ b/drivers/net/ethernet/cavium/thunder/nic.h
> @@ -135,6 +135,7 @@
>  #defineNICVF_TX_TIMEOUT(50 * HZ)
>
>  struct nicvf_cq_poll {
> +   struct  nicvf *nicvf;
> u8  cq_idx; /* Completion queue index */
> struct  napi_struct napi;
>  };
> diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c 
> b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
> index de51828..2198f61 100644
> --- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
> +++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
> @@ -653,11 +653,20 @@ static void nicvf_handle_qs_err(unsigned long data)
> nicvf_enable_intr(nic, NICVF_INTR_QS_ERR, 0);
>  }
>
> +static inline void nicvf_dump_intr_status(struct nicvf *nic)
> +{
> +   if (netif_msg_intr(nic))
> +   netdev_info(nic->netdev, "%s: interrupt status 0x%llx\n",
> +   nic->netdev->name, nicvf_reg_read(nic, 
> NIC_VF_INT));
> +}

Please check if you really need to mark this 'inline' here.

>  static irqreturn_t nicvf_misc_intr_handler(int irq, void *nicvf_irq)
>  {
> struct nicvf *nic = (struct nicvf *)nicvf_irq;
> u64 intr;
>
> +   nicvf_dump_intr_status(nic);
> +
> intr = nicvf_reg_read(nic, NIC_VF_INT);
> /* Check for spurious interrupt */
> if (!(intr & NICVF_INTR_MBOX_MASK))
> @@ -668,59 +677,58 @@ static irqreturn_t nicvf_misc_intr_handler(int irq, 
> void *nicvf_irq)
> return IRQ_HANDLED;
>  }
>
> -static irqreturn_t nicvf_intr_handler(int irq, void *nicvf_irq)
> +static irqreturn_t nicvf_intr_handler(int irq, void *cq_irq)
> +{
> +   struct nicvf_cq_poll *cq_poll = (struct nicvf_cq_poll *)cq_irq;
> +   struct nicvf *nic = cq_poll->nicvf;
> +   int qidx = cq_poll->cq_idx;
> +
> +   nicvf_dump_intr_status(nic);
> +
> +   /* Disable interrupts */
> +   nicvf_disable_intr(nic, NICVF_INTR_CQ, qidx);
> +
> +   /* Schedule NAPI */
> +   napi_schedule(&cq_poll->napi);
> +
> +   /* Clear interrupt */
> +   nicvf_clear_intr(nic, NICVF_INTR_CQ, qidx);
> +
> +   return IRQ_HANDLED;
> +}

You're not considering spurious irqs in the new irq handlers here and
below, and you schedule napi/tasklets unconditionally. Is that correct?
It looks to me like the previous implementation relied on reading
NIC_VF_INT to understand the irq type and what actions should be
performed. It generally allowed for the possibility that no interrupt had occurred.


> +
> +static irqreturn_t nicvf_rbdr_intr_handler(int irq, void *nicvf_irq)
>  {
> -   u64 qidx, intr, clear_intr = 0;
> -   u64 cq_intr, rbdr_intr, qs_err_intr;
> struct nicvf *nic = (struct nicvf *)nicvf_irq;
> -   struct queue_set *qs = nic->qs;
> -   struct nicvf_cq_poll *cq_poll = NULL;
> +   u8 qidx;
>
> -   intr = nicvf_reg_read(nic, NIC_VF_INT);
> -   if (netif_msg_intr(nic))
> -   netdev_info(nic->netdev, "%s: interrupt status 0x%llx\n",
> -   nic->netdev->name, intr);
> -
> -   qs_err_intr = intr & NICVF_INTR_QS_ERR_MASK;
> -   if (qs_err_intr) {
> -   /* Disable Qset err interrupt and schedule softirq */
> -   nicvf_disable_intr(nic, NICVF_INTR_QS_ERR, 0);
> -   tasklet_hi_schedule(&nic->qs_err_task);
> -   clear_intr |= qs_err_intr;
> -   }
>
> -   /* Disable interrupts and start polling */
> -   cq_intr = (intr & NICVF_INTR_CQ_MASK) >> NICVF_INTR_CQ_SHIFT;
> -   for (qidx = 0; qidx < qs->cq_cnt; qidx++) {
> -   if (!(cq_intr & (1 << qidx)))
> -   continue;
> -   if (!nicvf_is_intr_enabled(nic, NICVF_INTR_CQ, qidx))
> +   nicvf_dump_intr_status(nic);
> +
> +   /* Disable RBDR interrupt and schedule softirq */
> +   for (qidx = 0; qidx < 

Re: [PATCH net-next 6/8] net: thunderx: Rework interrupt handler

2015-08-28 Thread David Miller
From: Aleksey Makarov 
Date: Fri, 28 Aug 2015 17:59:58 +0300

> @@ -251,6 +251,8 @@ struct cmp_queue {
>   void*desc;
>   struct q_desc_mem   dmem;
>   struct cmp_queue_stats  stats;
> + int irq;
> + cpumask_t   affinity_mask;
>  } cacheline_aligned_in_smp;
>  
>  struct snd_queue {

This new "affinity_mask" member is not used, please remove it and respin
this patch series.

Thanks.


[PATCH net-next 6/8] net: thunderx: Rework interrupt handler

2015-08-28 Thread Aleksey Makarov
From: Sunil Goutham 

Rework interrupt handler to avoid checking IRQ affinity of
CQ interrupts. Now separate handlers are registered for each IRQ
including RBDR. Also register interrupt handlers for only those
which are being used.

Signed-off-by: Sunil Goutham 
Signed-off-by: Aleksey Makarov 
---
 drivers/net/ethernet/cavium/thunder/nic.h  |   1 +
 drivers/net/ethernet/cavium/thunder/nicvf_main.c   | 172 -
 drivers/net/ethernet/cavium/thunder/nicvf_queues.h |   2 +
 3 files changed, 103 insertions(+), 72 deletions(-)

diff --git a/drivers/net/ethernet/cavium/thunder/nic.h 
b/drivers/net/ethernet/cavium/thunder/nic.h
index a83f567..89b997e 100644
--- a/drivers/net/ethernet/cavium/thunder/nic.h
+++ b/drivers/net/ethernet/cavium/thunder/nic.h
@@ -135,6 +135,7 @@
 #defineNICVF_TX_TIMEOUT(50 * HZ)
 
 struct nicvf_cq_poll {
+   struct  nicvf *nicvf;
u8  cq_idx; /* Completion queue index */
struct  napi_struct napi;
 };
diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c 
b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
index de51828..2198f61 100644
--- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
@@ -653,11 +653,20 @@ static void nicvf_handle_qs_err(unsigned long data)
nicvf_enable_intr(nic, NICVF_INTR_QS_ERR, 0);
 }
 
+static inline void nicvf_dump_intr_status(struct nicvf *nic)
+{
+   if (netif_msg_intr(nic))
+   netdev_info(nic->netdev, "%s: interrupt status 0x%llx\n",
+   nic->netdev->name, nicvf_reg_read(nic, NIC_VF_INT));
+}
+
 static irqreturn_t nicvf_misc_intr_handler(int irq, void *nicvf_irq)
 {
struct nicvf *nic = (struct nicvf *)nicvf_irq;
u64 intr;
 
+   nicvf_dump_intr_status(nic);
+
intr = nicvf_reg_read(nic, NIC_VF_INT);
/* Check for spurious interrupt */
if (!(intr & NICVF_INTR_MBOX_MASK))
@@ -668,59 +677,58 @@ static irqreturn_t nicvf_misc_intr_handler(int irq, void 
*nicvf_irq)
return IRQ_HANDLED;
 }
 
-static irqreturn_t nicvf_intr_handler(int irq, void *nicvf_irq)
+static irqreturn_t nicvf_intr_handler(int irq, void *cq_irq)
+{
+   struct nicvf_cq_poll *cq_poll = (struct nicvf_cq_poll *)cq_irq;
+   struct nicvf *nic = cq_poll->nicvf;
+   int qidx = cq_poll->cq_idx;
+
+   nicvf_dump_intr_status(nic);
+
+   /* Disable interrupts */
+   nicvf_disable_intr(nic, NICVF_INTR_CQ, qidx);
+
+   /* Schedule NAPI */
+   napi_schedule(&cq_poll->napi);
+
+   /* Clear interrupt */
+   nicvf_clear_intr(nic, NICVF_INTR_CQ, qidx);
+
+   return IRQ_HANDLED;
+}
+
+static irqreturn_t nicvf_rbdr_intr_handler(int irq, void *nicvf_irq)
 {
-   u64 qidx, intr, clear_intr = 0;
-   u64 cq_intr, rbdr_intr, qs_err_intr;
struct nicvf *nic = (struct nicvf *)nicvf_irq;
-   struct queue_set *qs = nic->qs;
-   struct nicvf_cq_poll *cq_poll = NULL;
+   u8 qidx;
 
-   intr = nicvf_reg_read(nic, NIC_VF_INT);
-   if (netif_msg_intr(nic))
-   netdev_info(nic->netdev, "%s: interrupt status 0x%llx\n",
-   nic->netdev->name, intr);
-
-   qs_err_intr = intr & NICVF_INTR_QS_ERR_MASK;
-   if (qs_err_intr) {
-   /* Disable Qset err interrupt and schedule softirq */
-   nicvf_disable_intr(nic, NICVF_INTR_QS_ERR, 0);
-   tasklet_hi_schedule(&nic->qs_err_task);
-   clear_intr |= qs_err_intr;
-   }
 
-   /* Disable interrupts and start polling */
-   cq_intr = (intr & NICVF_INTR_CQ_MASK) >> NICVF_INTR_CQ_SHIFT;
-   for (qidx = 0; qidx < qs->cq_cnt; qidx++) {
-   if (!(cq_intr & (1 << qidx)))
-   continue;
-   if (!nicvf_is_intr_enabled(nic, NICVF_INTR_CQ, qidx))
+   nicvf_dump_intr_status(nic);
+
+   /* Disable RBDR interrupt and schedule softirq */
+   for (qidx = 0; qidx < nic->qs->rbdr_cnt; qidx++) {
+   if (!nicvf_is_intr_enabled(nic, NICVF_INTR_RBDR, qidx))
continue;
+   nicvf_disable_intr(nic, NICVF_INTR_RBDR, qidx);
+   tasklet_hi_schedule(&nic->rbdr_task);
+   /* Clear interrupt */
+   nicvf_clear_intr(nic, NICVF_INTR_RBDR, qidx);
+   }
 
-   nicvf_disable_intr(nic, NICVF_INTR_CQ, qidx);
-   clear_intr |= ((1 << qidx) << NICVF_INTR_CQ_SHIFT);
+   return IRQ_HANDLED;
+}
 
-   cq_poll = nic->napi[qidx];
-   /* Schedule NAPI */
-   if (cq_poll)
-   napi_schedule(&cq_poll->napi);
-   }
+static irqreturn_t nicvf_qs_err_intr_handler(int irq, void *nicvf_irq)
+{
+   struct nicvf *nic = (struct nicvf *)nicvf_irq;
 
-   /* Handle RBDR interrupts */
-   rbdr_intr = (intr & NICVF_INTR_RBDR_MASK) >> NICVF_INTR_RBDR_SHIFT;
-   if (rbdr_intr) {
-   /* Disable RBDR 
