From: Thomas Gleixner
Interrupts marked with this flag are excluded from user space interrupt
affinity changes. Contrary to the IRQ_NO_BALANCING flag, the kernel internal
affinity mechanism is not blocked.
This flag will be used for multi-queue device interrupts.
Signed-off-by: Thomas Gleixner
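A minimal sketch of the policy this commit message describes, using hypothetical flag and helper names (`IRQX_*` and `affinity_change_allowed` are illustrative stand-ins, not the patch's actual identifiers): a "managed affinity" flag rejects affinity changes that originate from user space while still letting the kernel's internal affinity code retarget the interrupt, whereas IRQ_NO_BALANCING blocks both.

```c
#include <stdbool.h>

#define IRQX_NO_BALANCING     (1u << 0) /* hypothetical: blocks all affinity changes */
#define IRQX_AFFINITY_MANAGED (1u << 1) /* hypothetical: blocks user-space changes only */

/* Decide whether an affinity change may proceed, given the IRQ's flags
 * and whether the request came from user space (e.g. a write to
 * /proc/irq/<n>/smp_affinity) or from the kernel's own affinity code. */
static bool affinity_change_allowed(unsigned int flags, bool from_user)
{
    if (flags & IRQX_NO_BALANCING)
        return false;               /* nothing may retarget this IRQ */
    if ((flags & IRQX_AFFINITY_MANAGED) && from_user)
        return false;               /* user space / irqbalanced locked out */
    return true;                    /* kernel-internal moves still allowed */
}
```

Under this reading, a managed interrupt looks immovable to irqbalanced but remains fully under the control of the kernel's spreading logic.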
On Thu, Jun 16, 2016 at 11:19:51AM -0400, Keith Busch wrote:
> On Wed, Jun 15, 2016 at 10:50:53PM +0200, Bart Van Assche wrote:
> > Does it matter on x86 systems whether or not these interrupt vectors are
> > also associated with a CPU with a higher CPU number? Although multiple bits
> > can be set
On Mon, Jun 20, 2016 at 03:21:47PM +0200, Bart Van Assche wrote:
> A notification mechanism that reports interrupt mapping changes will
> definitely help. What would also help is an API that allows drivers to
> query the MSI-X IRQ of an adapter that is nearest given a cpumask, e.g.
> hctx->cpuma
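A hedged sketch of the query API Bart asks for, assuming each vector's affinity is available as a plain bitmask (the 64-bit cpumask representation and `nearest_vector` are illustrative assumptions, not an existing kernel interface): pick the vector whose affinity mask overlaps the target cpumask the most.

```c
#include <stdint.h>

/* Count set bits in a 64-bit word. */
static int popcount64(uint64_t x)
{
    int n = 0;
    while (x) { x &= x - 1; n++; }
    return n;
}

/* Return the vector whose affinity mask shares the most CPUs with
 * the target mask (e.g. a blk-mq hctx's cpumask), or -1 if none of
 * the vectors shares any CPU with the target. */
static int nearest_vector(const uint64_t *vec_masks, int nvec, uint64_t target)
{
    int best = -1, best_overlap = 0;

    for (int v = 0; v < nvec; v++) {
        int overlap = popcount64(vec_masks[v] & target);
        if (overlap > best_overlap) {
            best_overlap = overlap;
            best = v;
        }
    }
    return best;
}
```

A driver could use such a helper to steer each submission context's completions to the closest MSI-X vector rather than guessing.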
On 06/20/2016 02:22 PM, Christoph Hellwig wrote:
On Thu, Jun 16, 2016 at 05:39:07PM +0200, Bart Van Assche wrote:
On 06/16/2016 05:20 PM, Christoph Hellwig wrote:
On Wed, Jun 15, 2016 at 09:36:54PM +0200, Bart Van Assche wrote:
My concern
is that I doubt that there is an interrupt assignment s
On Thu, Jun 16, 2016 at 05:39:07PM +0200, Bart Van Assche wrote:
> On 06/16/2016 05:20 PM, Christoph Hellwig wrote:
>> On Wed, Jun 15, 2016 at 09:36:54PM +0200, Bart Van Assche wrote:
> Do you agree that - ignoring other interrupt assignments - the latter
>>> interrupt assignment scheme woul
On Wed, Jun 15, 2016 at 09:36:54PM +0200, Bart Van Assche wrote:
> Do you agree that - ignoring other interrupt assignments - the latter
> interrupt assignment scheme would result in higher throughput and lower
> interrupt processing latency?
Probably. Once we've got it in the core IRQ cod
On Wed, Jun 15, 2016 at 10:50:53PM +0200, Bart Van Assche wrote:
> Does it matter on x86 systems whether or not these interrupt vectors are
> also associated with a CPU with a higher CPU number? Although multiple bits
> can be set in /proc/irq/<n>/smp_affinity, only the first bit counts on x86
> platfo
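Bart's observation can be illustrated with a small helper: if only the lowest set bit of the mask is effective, a multi-bit smp_affinity value collapses to a single target CPU (`effective_cpu` below is an illustration of that behavior as described, not kernel code):

```c
#include <stdint.h>

/* Given an smp_affinity bitmask, return the CPU number of the lowest
 * set bit - the CPU that, per the behavior described above, actually
 * receives the interrupt - or -1 for an empty mask. */
static int effective_cpu(uint64_t smp_affinity)
{
    int cpu = 0;

    if (!smp_affinity)
        return -1;
    while (!(smp_affinity & 1)) {
        smp_affinity >>= 1;
        cpu++;
    }
    return cpu;
}

/* e.g. a mask of 0x0c (CPUs 2 and 3) effectively targets CPU 2 */
```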
On 06/14/2016 09:58 PM, Christoph Hellwig wrote:
diff --git a/include/linux/irq.h b/include/linux/irq.h
index 4d758a7..49d66d1 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -197,6 +197,7 @@ struct irq_data {
* IRQD_IRQ_INPROGRESS - In progress state of the interrupt
*
On 06/15/2016 10:12 PM, Keith Busch wrote:
On Wed, Jun 15, 2016 at 04:06:55PM -0400, Keith Busch wrote:
0: A0 B0
1: A1 B1
2: A2 B2
3: A3 B3
4: A4 B4
5: A5 B5
6: A6 B6
7: A7 B7
8: (none)
...
31: (none)
I'll need to look at the follow-on patches to confirm, but that's
not what this should do
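The table above is what a naive per-adapter spread produces: each adapter independently maps vector i to CPU i, so two 8-vector adapters (A and B) pile onto CPUs 0-7 while CPUs 8-31 service no vector at all. A small sketch of that failure mode (`naive_spread` is illustrative, not kernel code):

```c
#include <string.h>

/* Count how many vectors land on each CPU when every adapter uses the
 * same naive mapping "vector v -> CPU v". */
static void naive_spread(int *vectors_on_cpu, int ncpus, int nadapters, int nvecs)
{
    memset(vectors_on_cpu, 0, ncpus * sizeof(*vectors_on_cpu));
    for (int adapter = 0; adapter < nadapters; adapter++)
        for (int v = 0; v < nvecs; v++)
            vectors_on_cpu[v % ncpus] += 1; /* every adapter hits the same CPUs */
}
```

With ncpus=32, nadapters=2, nvecs=8 this reproduces the table: CPUs 0-7 each service two vectors (A and B), CPUs 8-31 none.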
On Wed, Jun 15, 2016 at 09:36:54PM +0200, Bart Van Assche wrote:
> Sorry that I had not yet made this clear, but my concern is about a
> system equipped with two or more adapters and with more CPU cores than the
> number of MSI-X interrupts per adapter. Consider e.g. a system with two
> adapter
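One way to visualize the multi-adapter case Bart raises: stagger each adapter's vectors by a per-adapter offset so that, with more CPUs than vectors per adapter, the adapters land on disjoint CPU sets instead of stacking up. `vector_cpu` below is a hypothetical sketch of that assignment scheme, not something the patches implement:

```c
/* Assign each (adapter, vector) pair a CPU, offsetting successive
 * adapters by their vector count: adapter 0 -> CPUs 0..nvecs-1,
 * adapter 1 -> CPUs nvecs..2*nvecs-1, wrapping once CPUs run out. */
static int vector_cpu(int adapter, int vec, int nvecs, int ncpus)
{
    return (adapter * nvecs + vec) % ncpus;
}
```

On a 32-CPU box with two 8-vector adapters, adapter 0 covers CPUs 0-7 and adapter 1 covers CPUs 8-15, so no CPU services interrupts from both.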
On 06/15/2016 06:03 PM, Keith Busch wrote:
On Wed, Jun 15, 2016 at 05:28:54PM +0200, Bart Van Assche wrote:
On 06/15/2016 05:14 PM, Keith Busch wrote:
I think the idea is to have the irq_affinity mask match the CPU mapping on
the submission side context associated with that particular vector. If
two identical adapters generate the s
On 06/15/2016 05:14 PM, Keith Busch wrote:
On Wed, Jun 15, 2016 at 12:42:53PM +0200, Bart Van Assche wrote:
If two identical adapters are present in a system, will these generate the
same irq_affinity mask? Do you agree that interrupt vectors from different
adapters should be assigned to differe
On Wed, Jun 15, 2016 at 12:42:53PM +0200, Bart Van Assche wrote:
> Today irqbalanced is responsible for deciding how to assign interrupts from
> different adapters to CPU cores. Does the above mean that for adapters that
> support multiple MSI-X interrupts the kernel will have full responsibility
>
On 06/15/2016 12:23 PM, Christoph Hellwig wrote:
Hi Bart,
On Wed, Jun 15, 2016 at 10:44:37AM +0200, Bart Van Assche wrote:
> However, is excluding these interrupts from irqbalanced really the
> way to go?
What positive effect will irqbalanced have on explicitly spread
interrupts?
> Suppose e.g. that a system is equipped with two RDMA adapter