Re: [PATCH 4/4] genirq: use irq's affinity for threaded irq with IRQF_RESCUE_THREAD

2019-09-06 Thread John Garry

On 27/08/2019 09:53, Ming Lei wrote:

> In case of IRQF_RESCUE_THREAD, the threaded handler is only used to
> handle the interrupt when an IRQ flood occurs, so use the irq's affinity
> for this thread so that the scheduler may select other, less busy CPUs
> for handling the interrupt.
> 
> Cc: Long Li 
> Cc: Ingo Molnar 
> Cc: Peter Zijlstra 
> Cc: Keith Busch 
> Cc: Jens Axboe 
> Cc: Christoph Hellwig 
> Cc: Sagi Grimberg 
> Cc: John Garry 
> Cc: Thomas Gleixner 
> Cc: Hannes Reinecke 
> Cc: linux-n...@lists.infradead.org
> Cc: linux-s...@vger.kernel.org
> Signed-off-by: Ming Lei 

> ---
>  kernel/irq/manage.c | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
> index 1566abbf50e8..03bc041348b7 100644
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -968,7 +968,18 @@ irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action)
>  	if (cpumask_available(desc->irq_common_data.affinity)) {
>  		const struct cpumask *m;
> 
> -		m = irq_data_get_effective_affinity_mask(&desc->irq_data);
> +		/*
> +		 * Managed IRQ's affinity is setup gracefull on MUNA locality,


gracefully


> +		 * also if IRQF_RESCUE_THREAD is set, interrupt flood has been
> +		 * triggered, so ask scheduler to run the thread on CPUs
> +		 * specified by this interrupt's affinity.
> +		 */


Hi Ming,


> +		if ((action->flags & IRQF_RESCUE_THREAD) &&
> +		    irqd_affinity_is_managed(&desc->irq_data))


This doesn't look to solve the other issue I reported: when we handle the 
interrupt natively in a threaded handler, the hard irq plus the threaded 
handler fully occupies the CPU, limiting throughput.


So can we expand the scope to cover that scenario as well? I don't think 
it's right to solve that separately. If we're continuing with this 
approach, can we add a separate check for spreading the cpumask for the 
threaded part?


Thanks,
John


> +			m = desc->irq_common_data.affinity;
> +		else
> +			m = irq_data_get_effective_affinity_mask(
> +						&desc->irq_data);
>  		cpumask_copy(mask, m);
>  	} else {
>  		valid = false;

Re: [PATCH 4/4] genirq: use irq's affinity for threaded irq with IRQF_RESCUE_THREAD

2019-08-27 Thread Keith Busch
On Tue, Aug 27, 2019 at 04:53:44PM +0800, Ming Lei wrote:
> In case of IRQF_RESCUE_THREAD, the threaded handler is only used to
> handle the interrupt when an IRQ flood occurs, so use the irq's affinity
> for this thread so that the scheduler may select other, less busy CPUs
> for handling the interrupt.
> 
> Cc: Long Li 
> Cc: Ingo Molnar 
> Cc: Peter Zijlstra 
> Cc: Keith Busch 
> Cc: Jens Axboe 
> Cc: Christoph Hellwig 
> Cc: Sagi Grimberg 
> Cc: John Garry 
> Cc: Thomas Gleixner 
> Cc: Hannes Reinecke 
> Cc: linux-n...@lists.infradead.org
> Cc: linux-s...@vger.kernel.org
> Signed-off-by: Ming Lei 
> ---
>  kernel/irq/manage.c | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
> index 1566abbf50e8..03bc041348b7 100644
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -968,7 +968,18 @@ irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action)
>   if (cpumask_available(desc->irq_common_data.affinity)) {
>   const struct cpumask *m;
>  
> - m = irq_data_get_effective_affinity_mask(&desc->irq_data);
> + /*
> +  * Managed IRQ's affinity is setup gracefull on MUNA locality,

s/MUNA/NUMA

> +  * also if IRQF_RESCUE_THREAD is set, interrupt flood has been
> +  * triggered, so ask scheduler to run the thread on CPUs
> +  * specified by this interrupt's affinity.
> +  */
> + if ((action->flags & IRQF_RESCUE_THREAD) &&
> + irqd_affinity_is_managed(&desc->irq_data))
> + m = desc->irq_common_data.affinity;
> + else
> + m = irq_data_get_effective_affinity_mask(
> + &desc->irq_data);
>   cpumask_copy(mask, m);
>   } else {
>   valid = false;
> -- 


[PATCH 4/4] genirq: use irq's affinity for threaded irq with IRQF_RESCUE_THREAD

2019-08-27 Thread Ming Lei
In case of IRQF_RESCUE_THREAD, the threaded handler is only used to
handle the interrupt when an IRQ flood occurs, so use the irq's affinity
for this thread so that the scheduler may select other, less busy CPUs
for handling the interrupt.

Cc: Long Li 
Cc: Ingo Molnar 
Cc: Peter Zijlstra 
Cc: Keith Busch 
Cc: Jens Axboe 
Cc: Christoph Hellwig 
Cc: Sagi Grimberg 
Cc: John Garry 
Cc: Thomas Gleixner 
Cc: Hannes Reinecke 
Cc: linux-n...@lists.infradead.org
Cc: linux-s...@vger.kernel.org
Signed-off-by: Ming Lei 
---
 kernel/irq/manage.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 1566abbf50e8..03bc041348b7 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -968,7 +968,18 @@ irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action)
if (cpumask_available(desc->irq_common_data.affinity)) {
const struct cpumask *m;
 
-   m = irq_data_get_effective_affinity_mask(&desc->irq_data);
+   /*
+* Managed IRQ's affinity is setup gracefull on MUNA locality,
+* also if IRQF_RESCUE_THREAD is set, interrupt flood has been
+* triggered, so ask scheduler to run the thread on CPUs
+* specified by this interrupt's affinity.
+*/
+   if ((action->flags & IRQF_RESCUE_THREAD) &&
+   irqd_affinity_is_managed(&desc->irq_data))
+   m = desc->irq_common_data.affinity;
+   else
+   m = irq_data_get_effective_affinity_mask(
+   &desc->irq_data);
cpumask_copy(mask, m);
} else {
valid = false;
-- 
2.20.1