Re: [PATCH 6/6] net: mvneta: Statically assign queues to CPUs

2015-07-06 Thread Maxime Ripard
On Sun, Jul 05, 2015 at 03:00:11PM +0200, Willy Tarreau wrote:
 Hi Thomas,
 
 On Fri, Jul 03, 2015 at 04:46:24PM +0200, Thomas Petazzoni wrote:
  Maxime,
  
  On Fri,  3 Jul 2015 16:25:51 +0200, Maxime Ripard wrote:
  
   +static void mvneta_percpu_enable(void *arg)
   +{
   + struct mvneta_port *pp = arg;
   +
   + enable_percpu_irq(pp->dev->irq, IRQ_TYPE_NONE);
   +}
   +
static int mvneta_open(struct net_device *dev)
{
 struct mvneta_port *pp = netdev_priv(dev);
   @@ -2655,6 +2662,19 @@ static int mvneta_open(struct net_device *dev)
 goto err_cleanup_txqs;
 }

   + /*
   +  * Even though the documentation says that request_percpu_irq
   +  * doesn't enable the interrupts automatically, it actually
   +  * does so on the local CPU.
   +  *
   +  * Make sure it's disabled.
   +  */
   + disable_percpu_irq(pp->dev->irq);
   +
   + /* Enable per-CPU interrupt on the one CPU we care about */
   + smp_call_function_single(rxq_def % num_online_cpus(),
   +  mvneta_percpu_enable, pp, true);
  
  What happens if that CPU goes offline through CPU hotplug?
 
 I just tried: if I start mvneta with rxq_def=1, then my irq runs
 on CPU1. Then I offline CPU1 and the irqs are automatically handled
 by CPU0. Then I online CPU1 and irqs stay on CPU0.

I'm confused, I would have thought that it wouldn't even work.

I guess it can be easily solved with a cpu_notifier though.
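
For reference, here is a hedged sketch of what such a notifier could look
like with the 4.1-era hotplug API. The `cpu_nb` field in struct mvneta_port
and the registration point are assumptions of mine, not part of this series;
`mvneta_percpu_enable` and `rxq_def` come from the patch itself. This is
untested and only meant to illustrate the idea:

```c
/*
 * Hypothetical sketch: re-enable the per-CPU interrupt when the CPU
 * the RX queue was assigned to comes back online. Assumes a
 * 'struct notifier_block cpu_nb' member added to struct mvneta_port.
 */
static int mvneta_cpu_notify(struct notifier_block *nb,
			     unsigned long action, void *hcpu)
{
	struct mvneta_port *pp = container_of(nb, struct mvneta_port, cpu_nb);
	unsigned int cpu = (unsigned long)hcpu;

	switch (action & ~CPU_TASKS_FROZEN) {
	case CPU_ONLINE:
		/* If this is the CPU our queue is pinned to, move the
		 * interrupt back to it. */
		if (cpu == rxq_def % num_online_cpus())
			smp_call_function_single(cpu, mvneta_percpu_enable,
						 pp, true);
		break;
	}
	return NOTIFY_OK;
}

/* In mvneta_open(), after the smp_call_function_single() call:
 *	pp->cpu_nb.notifier_call = mvneta_cpu_notify;
 *	register_cpu_notifier(&pp->cpu_nb);
 * with a matching unregister_cpu_notifier() in mvneta_stop().
 */
```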

 More or less related, I found that if I enable a queue number larger
 than the CPU count it does work, but then the system complains
 during rmmod:
 
 [  877.146203] ------------[ cut here ]------------
 [  877.146227] WARNING: CPU: 1 PID: 1731 at fs/proc/generic.c:552 remove_proc_entry+0x144/0x15c()
 [  877.146233] remove_proc_entry: removing non-empty directory 'irq/29', leaking at least 'mvneta'
 [  877.146238] Modules linked in: mvneta(-) [last unloaded: mvneta]
 [  877.146254] CPU: 1 PID: 1731 Comm: rmmod Tainted: G        W       4.1.1-mvebu-6-g3d317ed-dirty #5
 [  877.146260] Hardware name: Marvell Armada 370/XP (Device Tree)
 [  877.146281] [<c0017194>] (unwind_backtrace) from [<c00126d4>] (show_stack+0x10/0x14)
 [  877.146293] [<c00126d4>] (show_stack) from [<c04d32e4>] (dump_stack+0x74/0x90)
 [  877.146305] [<c04d32e4>] (dump_stack) from [<c0025200>] (warn_slowpath_common+0x74/0xb0)
 [  877.146315] [<c0025200>] (warn_slowpath_common) from [<c00252d0>] (warn_slowpath_fmt+0x30/0x40)
 [  877.146325] [<c00252d0>] (warn_slowpath_fmt) from [<c0115694>] (remove_proc_entry+0x144/0x15c)
 [  877.146336] [<c0115694>] (remove_proc_entry) from [<c0060fd4>] (unregister_irq_proc+0x8c/0xb0)
 [  877.146347] [<c0060fd4>] (unregister_irq_proc) from [<c0059f84>] (free_desc+0x28/0x58)
 [  877.146356] [<c0059f84>] (free_desc) from [<c005a028>] (irq_free_descs+0x44/0x80)
 [  877.146368] [<c005a028>] (irq_free_descs) from [<bf015748>] (mvneta_remove+0x3c/0x4c [mvneta])
 [  877.146382] [<bf015748>] (mvneta_remove [mvneta]) from [<c024d6e8>] (platform_drv_remove+0x18/0x30)
 [  877.146393] [<c024d6e8>] (platform_drv_remove) from [<c024bd50>] (__device_release_driver+0x70/0xe4)
 [  877.146402] [<c024bd50>] (__device_release_driver) from [<c024c5b8>] (driver_detach+0xcc/0xd0)
 [  877.146411] [<c024c5b8>] (driver_detach) from [<c024bbe0>] (bus_remove_driver+0x4c/0x90)
 [  877.146425] [<c024bbe0>] (bus_remove_driver) from [<c007d3f0>] (SyS_delete_module+0x164/0x1b4)
 [  877.146437] [<c007d3f0>] (SyS_delete_module) from [<c000f4c0>] (ret_fast_syscall+0x0/0x3c)
 [  877.146443] ---[ end trace 48713a9ae31204b1 ]---
 
 This was on the AX3 (dual-proc) with rxq_def=2.

Hmmm, interesting, I'll look into it, thanks!
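
The backtrace ends with irq_free_descs() reaching unregister_irq_proc()
while a 'mvneta' action still owns proc entries under irq/29, which
suggests the per-CPU IRQ is not released before its descriptor is freed.
A hedged sketch of a teardown ordering that would avoid the warning
(function names taken from the trace; the free_percpu_irq() dev_id
cookie and the surrounding remove-path details are assumptions, since
they depend on earlier patches in this series):

```c
/*
 * Hypothetical mvneta_remove() ordering, sketched from the backtrace
 * above. Releasing the per-CPU IRQ action first means the irq/29 proc
 * directory is empty by the time the descriptor itself is freed.
 * The second argument to free_percpu_irq() is an assumption.
 */
static int mvneta_remove(struct platform_device *pdev)
{
	struct net_device *dev = platform_get_drvdata(pdev);

	unregister_netdev(dev);

	/* Release the per-CPU action before dropping the descriptor */
	free_percpu_irq(dev->irq, dev);

	/* Only now is it safe to free the irq descriptor */
	irq_free_descs(dev->irq, 1);

	free_netdev(dev);
	return 0;
}
```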

Maxime

-- 
Maxime Ripard, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com




Re: [PATCH 6/6] net: mvneta: Statically assign queues to CPUs

2015-07-05 Thread Willy Tarreau
Hi Thomas,

On Fri, Jul 03, 2015 at 04:46:24PM +0200, Thomas Petazzoni wrote:
 Maxime,
 
 On Fri,  3 Jul 2015 16:25:51 +0200, Maxime Ripard wrote:
 
  +static void mvneta_percpu_enable(void *arg)
  +{
  +   struct mvneta_port *pp = arg;
  +
  +   enable_percpu_irq(pp->dev->irq, IRQ_TYPE_NONE);
  +}
  +
   static int mvneta_open(struct net_device *dev)
   {
  struct mvneta_port *pp = netdev_priv(dev);
  @@ -2655,6 +2662,19 @@ static int mvneta_open(struct net_device *dev)
  goto err_cleanup_txqs;
  }
   
  +   /*
  +* Even though the documentation says that request_percpu_irq
  +* doesn't enable the interrupts automatically, it actually
  +* does so on the local CPU.
  +*
  +* Make sure it's disabled.
  +*/
  +   disable_percpu_irq(pp->dev->irq);
  +
  +   /* Enable per-CPU interrupt on the one CPU we care about */
  +   smp_call_function_single(rxq_def % num_online_cpus(),
  +mvneta_percpu_enable, pp, true);
 
 What happens if that CPU goes offline through CPU hotplug?

I just tried: if I start mvneta with rxq_def=1, then my irq runs on
CPU1. Then I offline CPU1 and the irqs are automatically handled by CPU0.
Then I online CPU1 and irqs stay on CPU0.

More or less related, I found that if I enable a queue number larger than
the CPU count it does work, but then the system complains during rmmod:

[  877.146203] ------------[ cut here ]------------
[  877.146227] WARNING: CPU: 1 PID: 1731 at fs/proc/generic.c:552 remove_proc_entry+0x144/0x15c()
[  877.146233] remove_proc_entry: removing non-empty directory 'irq/29', leaking at least 'mvneta'
[  877.146238] Modules linked in: mvneta(-) [last unloaded: mvneta]
[  877.146254] CPU: 1 PID: 1731 Comm: rmmod Tainted: G        W       4.1.1-mvebu-6-g3d317ed-dirty #5
[  877.146260] Hardware name: Marvell Armada 370/XP (Device Tree)
[  877.146281] [<c0017194>] (unwind_backtrace) from [<c00126d4>] (show_stack+0x10/0x14)
[  877.146293] [<c00126d4>] (show_stack) from [<c04d32e4>] (dump_stack+0x74/0x90)
[  877.146305] [<c04d32e4>] (dump_stack) from [<c0025200>] (warn_slowpath_common+0x74/0xb0)
[  877.146315] [<c0025200>] (warn_slowpath_common) from [<c00252d0>] (warn_slowpath_fmt+0x30/0x40)
[  877.146325] [<c00252d0>] (warn_slowpath_fmt) from [<c0115694>] (remove_proc_entry+0x144/0x15c)
[  877.146336] [<c0115694>] (remove_proc_entry) from [<c0060fd4>] (unregister_irq_proc+0x8c/0xb0)
[  877.146347] [<c0060fd4>] (unregister_irq_proc) from [<c0059f84>] (free_desc+0x28/0x58)
[  877.146356] [<c0059f84>] (free_desc) from [<c005a028>] (irq_free_descs+0x44/0x80)
[  877.146368] [<c005a028>] (irq_free_descs) from [<bf015748>] (mvneta_remove+0x3c/0x4c [mvneta])
[  877.146382] [<bf015748>] (mvneta_remove [mvneta]) from [<c024d6e8>] (platform_drv_remove+0x18/0x30)
[  877.146393] [<c024d6e8>] (platform_drv_remove) from [<c024bd50>] (__device_release_driver+0x70/0xe4)
[  877.146402] [<c024bd50>] (__device_release_driver) from [<c024c5b8>] (driver_detach+0xcc/0xd0)
[  877.146411] [<c024c5b8>] (driver_detach) from [<c024bbe0>] (bus_remove_driver+0x4c/0x90)
[  877.146425] [<c024bbe0>] (bus_remove_driver) from [<c007d3f0>] (SyS_delete_module+0x164/0x1b4)
[  877.146437] [<c007d3f0>] (SyS_delete_module) from [<c000f4c0>] (ret_fast_syscall+0x0/0x3c)
[  877.146443] ---[ end trace 48713a9ae31204b1 ]---

This was on the AX3 (dual-proc) with rxq_def=2.

Hoping this helps,
Willy

--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 6/6] net: mvneta: Statically assign queues to CPUs

2015-07-03 Thread Thomas Petazzoni
Maxime,

On Fri,  3 Jul 2015 16:25:51 +0200, Maxime Ripard wrote:

 +static void mvneta_percpu_enable(void *arg)
 +{
 + struct mvneta_port *pp = arg;
 +
 + enable_percpu_irq(pp->dev->irq, IRQ_TYPE_NONE);
 +}
 +
  static int mvneta_open(struct net_device *dev)
  {
   struct mvneta_port *pp = netdev_priv(dev);
 @@ -2655,6 +2662,19 @@ static int mvneta_open(struct net_device *dev)
   goto err_cleanup_txqs;
   }
  
 + /*
 +  * Even though the documentation says that request_percpu_irq
 +  * doesn't enable the interrupts automatically, it actually
 +  * does so on the local CPU.
 +  *
 +  * Make sure it's disabled.
 +  */
 + disable_percpu_irq(pp->dev->irq);
 +
 + /* Enable per-CPU interrupt on the one CPU we care about */
 + smp_call_function_single(rxq_def % num_online_cpus(),
 +  mvneta_percpu_enable, pp, true);

What happens if that CPU goes offline through CPU hotplug?

Thanks!

Thomas
-- 
Thomas Petazzoni, CTO, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com


[PATCH 6/6] net: mvneta: Statically assign queues to CPUs

2015-07-03 Thread Maxime Ripard
Since the switch to per-CPU interrupts, we have lost the ability to choose
which CPU receives our RX interrupt: it is now always the CPU on which the
mvneta_open function happened to run.

We can now assign our queues to their respective CPUs, and make sure that
only that CPU handles our traffic.

This also paves the road to be able to change that at runtime, and later on
to support RSS.

Signed-off-by: Maxime Ripard <maxime.rip...@free-electrons.com>
---
 drivers/net/ethernet/marvell/mvneta.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/drivers/net/ethernet/marvell/mvneta.c 
b/drivers/net/ethernet/marvell/mvneta.c
index 0d21b8a779d9..658d713abc18 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2630,6 +2630,13 @@ static void mvneta_mdio_remove(struct mvneta_port *pp)
	pp->phy_dev = NULL;
 }
 
+static void mvneta_percpu_enable(void *arg)
+{
+   struct mvneta_port *pp = arg;
+
+   enable_percpu_irq(pp->dev->irq, IRQ_TYPE_NONE);
+}
+
 static int mvneta_open(struct net_device *dev)
 {
struct mvneta_port *pp = netdev_priv(dev);
@@ -2655,6 +2662,19 @@ static int mvneta_open(struct net_device *dev)
goto err_cleanup_txqs;
}
 
+   /*
+* Even though the documentation says that request_percpu_irq
+* doesn't enable the interrupts automatically, it actually
+* does so on the local CPU.
+*
+* Make sure it's disabled.
+*/
+   disable_percpu_irq(pp->dev->irq);
+
+   /* Enable per-CPU interrupt on the one CPU we care about */
+   smp_call_function_single(rxq_def % num_online_cpus(),
+mvneta_percpu_enable, pp, true);
+
/* In default link is down */
	netif_carrier_off(pp->dev);
 
-- 
2.4.5
