Re: [PATCH net 5/6] net: mvneta: The mvneta_percpu_elect function should be atomic

2016-02-01 Thread Gregory CLEMENT
Hi David,
 
 On Sat., Jan. 30 2016, David Miller wrote:

> From: Gregory CLEMENT 
> Date: Fri, 29 Jan 2016 17:26:06 +0100
>
>> @@ -370,6 +370,8 @@ struct mvneta_port {
>>  struct net_device *dev;
>>  struct notifier_block cpu_notifier;
>>  int rxq_def;
>> +/* protect  */
>> +spinlock_t lock;
>>  
>>  /* Core clock */
>>  struct clk *clk;
>
> Protect what?  This comment needs a lot of improvement.

Sorry about that, this was a leftover.

>
> Everyone knows a spinlock "protects" things, so if you aren't going
> to actually describe what this lock protects, and in what contexts
> the lock is used, you might as well not say anything at all.

I can only agree with you; I will fix it in the next version.

Thanks,

Gregory


-- 
Gregory Clement, Free Electrons
Kernel, drivers, real-time and embedded Linux
development, consulting, training and support.
http://free-electrons.com


Re: [PATCH net 5/6] net: mvneta: The mvneta_percpu_elect function should be atomic

2016-01-29 Thread David Miller
From: Gregory CLEMENT 
Date: Fri, 29 Jan 2016 17:26:06 +0100

> @@ -370,6 +370,8 @@ struct mvneta_port {
>   struct net_device *dev;
>   struct notifier_block cpu_notifier;
>   int rxq_def;
> + /* protect  */
> + spinlock_t lock;
>  
>   /* Core clock */
>   struct clk *clk;

Protect what?  This comment needs a lot of improvement.

Everyone knows a spinlock "protects" things, so if you aren't going
to actually describe what this lock protects, and in what contexts
the lock is used, you might as well not say anything at all.
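
[Editorial note: for illustration only, a comment answering those questions might read
something like the sketch below. The wording is hypothetical, not taken from a later
revision of the series; it assumes the lock ends up covering the CPU election loop and
the per-CPU interrupt masking, as in this patch.]

	/* Serializes mvneta_percpu_elect() against the CPU hotplug
	 * notifier: protects the CPU <-> RX queue mapping and the
	 * per-CPU interrupt masks while a CPU is added or removed.
	 */
	spinlock_t lock;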


[PATCH net 5/6] net: mvneta: The mvneta_percpu_elect function should be atomic

2016-01-29 Thread Gregory CLEMENT
Electing a CPU must be done in an atomic way: it should be done before
or after the removal/insertion of a CPU, and this function is not
reentrant.

During the loop of mvneta_percpu_elect we associate the queues with the
CPUs; if there is a topology change during this loop, the mapping
between the CPUs and the queues could be wrong. During this loop the
interrupt mask is also updated for each CPU, so it must not be changed
at the same time by another part of the driver.

This patch adds a spinlock to create the needed critical sections.

Signed-off-by: Gregory CLEMENT 
---
 drivers/net/ethernet/marvell/mvneta.c | 13 +
 1 file changed, 13 insertions(+)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 1ed813d478e8..3358c9a70467 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -370,6 +370,8 @@ struct mvneta_port {
struct net_device *dev;
struct notifier_block cpu_notifier;
int rxq_def;
+   /* protect  */
+   spinlock_t lock;
 
/* Core clock */
struct clk *clk;
@@ -2855,6 +2857,11 @@ static void mvneta_percpu_elect(struct mvneta_port *pp)
 {
int online_cpu_idx, max_cpu, cpu, i = 0;
 
+   /* Electing a CPU must be done in an atomic way: it should be
+* done after or before the removal/insertion of a CPU and
+* this function is not reentrant.
+*/
+   spin_lock(&pp->lock);
online_cpu_idx = pp->rxq_def % num_online_cpus();
max_cpu = num_present_cpus();
 
@@ -2893,6 +2900,7 @@ static void mvneta_percpu_elect(struct mvneta_port *pp)
i++;
 
}
+   spin_unlock(&pp->lock);
 };
 
 static int mvneta_percpu_notifier(struct notifier_block *nfb,
@@ -2947,8 +2955,13 @@ static int mvneta_percpu_notifier(struct notifier_block *nfb,
case CPU_DOWN_PREPARE:
case CPU_DOWN_PREPARE_FROZEN:
netif_tx_stop_all_queues(pp->dev);
+   /* Thanks to this lock we are sure that any pending
+* cpu election is done
+*/
+   spin_lock(&pp->lock);
/* Mask all ethernet port interrupts */
on_each_cpu(mvneta_percpu_mask_interrupt, pp, true);
+   spin_unlock(&pp->lock);
 
napi_synchronize(&port->napi);
napi_disable(&port->napi);
-- 
2.5.0