Ming,
On Tue, 25 Jun 2019, Ming Lei wrote:
> On Mon, Jun 24, 2019 at 05:42:39PM +0200, Thomas Gleixner wrote:
> > On Mon, 24 Jun 2019, Weiping Zhang wrote:
> >
> > > The driver may implement multiple affinity sets, and some of them
> > > are empty; for this case we just skip them.
On Mon, 24 Jun 2019, Weiping Zhang wrote:
> The driver may implement multiple affinity sets, and some of them
> are empty; for this case we just skip them.
Why? What's the point of creating empty sets? 'Just because' is not a
really good justification.
Leaving the patch for Ming.
Thanks,
tglx
>
On Tue, 19 Feb 2019, 陈华才 wrote:
>
> I've tested, this patch can fix the nvme problem, but it can't be applied
> to 4.19 because of different context. And, I still think my original solution
> (genirq/affinity: Assign default affinity to pre/post vectors) is correct.
> There may be similar problems
Ming,
On Mon, 18 Feb 2019, Ming Lei wrote:
> On Sun, Feb 17, 2019 at 08:17:05PM +0100, Thomas Gleixner wrote:
> > I don't see how that would break blk-mq. The unmanaged set is not used by
> > the blk-mq stuff, that's some driver internal voodoo. So blk-mq still gets
On Sun, 17 Feb 2019, Ming Lei wrote:
> On Sat, Feb 16, 2019 at 06:13:13PM +0100, Thomas Gleixner wrote:
> > Some drivers need an extra set of interrupts which should not be marked
> > managed, but should get initial interrupt spreading.
>
> Could you share the drivers and
Fixed the kernel doc comments for struct
irq_affinity and de-'This patch'-ed the changelog ]
Signed-off-by: Ming Lei
Signed-off-by: Thomas Gleixner
---
drivers/pci/msi.c | 25 ++--
drivers/scsi/be2iscsi/be_main.c |2 -
include/linux/interrupt.h |
and the callsite
handling the ENOSPC situation.
Remove the now obsolete sanity checks and the related comments.
Signed-off-by: Thomas Gleixner
---
drivers/pci/msi.c | 14 --
1 file changed, 14 deletions(-)
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -1035,13 +1035,6
dropped adjustment of
number of sets ]
Signed-off-by: Ming Lei
Signed-off-by: Thomas Gleixner
---
drivers/nvme/host/pci.c | 116
1 file changed, 39 insertions(+), 77 deletions(-)
Index: b/drivers/nvme/host/pci.c
= {
	.pre_vectors	= 2,
	.unmanaged_sets	= 0x02,
	.calc_sets	= drv_calc_sets,
};
For both interrupt sets the interrupts are properly spread out, but the
second set is not marked managed.
Signed-off-by: Thomas Gleixner
Now that the NVME driver is converted over to the calc_set() callback, the
workarounds of the original set support can be removed.
Signed-off-by: Thomas Gleixner
---
kernel/irq/affinity.c | 20
1 file changed, 4 insertions(+), 16 deletions(-)
Index: b/kernel/irq
All information and calculations in the interrupt affinity spreading code
is strictly unsigned int. Though the code uses int all over the place.
Convert it over to unsigned int.
Signed-off-by: Thomas Gleixner
---
include/linux/interrupt.h | 20 +---
kernel/irq/affinity.c
and
de-'This patch'-ed the changelog ]
Signed-off-by: Ming Lei
Signed-off-by: Thomas Gleixner
---
drivers/nvme/host/pci.c |7 +++
include/linux/interrupt.h |9 ++---
kernel/irq/affinity.c | 16
3 files changed, 21 insertions(+), 11 deletio
This is the final update to the series with a few corner cases fixes
vs. V5 which can be found here:
https://lkml.kernel.org/r/20190214204755.819014...@linutronix.de
The series applies against:
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git master
and is also available from:
On Fri, 15 Feb 2019, Thomas Gleixner wrote:
> On Fri, 15 Feb 2019, Ming Lei wrote:
> > > + * If only one interrupt is available, combine write and read
> > > + * queues. If 'write_queues' is set, ensure it leaves room for at
> > > + * least one read qu
On Fri, 15 Feb 2019, Thomas Gleixner wrote:
> On Fri, 15 Feb 2019, Marc Zyngier wrote:
> > > + */
> > > + if (nrirqs == 1)
> > > + nr_read_queues = 0;
> > > + else if (write_queues >= nrirqs)
> > > + nr_read_queues = nrirqs - 1;
On Fri, 15 Feb 2019, Marc Zyngier wrote:
> On Thu, 14 Feb 2019 20:47:59 +,
> Thomas Gleixner wrote:
> > drivers/nvme/host/pci.c | 108
> >
> > 1 file changed, 28 insertions(+), 80 deletions(-)
> >
> > ---
On Fri, 15 Feb 2019, Ming Lei wrote:
> > +* If only one interrupt is available, combine write and read
> > +* queues. If 'write_queues' is set, ensure it leaves room for at
> > +* least one read queue.
> > +*/
> > + if (nrirqs == 1)
> > + nr_read_queues = 0;
> > + else
This is a follow up on Ming's V4 patch series, which addresses the short
comings of multiple interrupt sets in the core code:
https://lkml.kernel.org/r/20190214122347.17372-1-ming@redhat.com
The series applies against:
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git master
imple case (no sets required). Moved the sanity check
for nr_sets after the invocation of the callback so it catches
broken drivers. Fixed the kernel doc comments for struct
irq_affinity and de-'This patch'-ed the changelog ]
Signed-off-by: Ming Lei
Signe
Now that the NVME driver is converted over to the calc_set() callback, the
workarounds of the original set support can be removed.
Signed-off-by: Thomas Gleixner
---
kernel/irq/affinity.c | 17 -
1 file changed, 4 insertions(+), 13 deletions(-)
--- a/kernel/irq/affinity.c
functional change.
Signed-off-by: Thomas Gleixner
---
kernel/irq/affinity.c | 18 --
1 file changed, 8 insertions(+), 10 deletions(-)
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -98,6 +98,7 @@ static int __irq_build_affinity_masks(co
dropped adjustment of
number of sets ]
Signed-off-by: Ming Lei
Signed-off-by: Thomas Gleixner
---
drivers/nvme/host/pci.c | 108
1 file changed, 28 insertions(+), 80 deletions(-)
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
On Thu, 14 Feb 2019, Ming Lei wrote:
> The driver passes initial configuration for the interrupt allocation via
> a pointer to struct affinity_desc.
Btw, blindly copying a suggestion without proof reading is a bad idea. That
wants to be 'struct irq_affinity' obviously. Tired brain confused them
y
On Thu, 14 Feb 2019, Ming Lei wrote:
> + if (affd->calc_sets) {
> + affd->calc_sets(affd, nvecs);
> + } else if (!affd->nr_sets) {
> + affd->nr_sets = 1;
> + affd->set_size[0] = affvecs;
Hrmpf. I suggested that to you to get rid of the nr_sets local vari
On Thu, 14 Feb 2019, Ming Lei wrote:
> /**
> * struct irq_affinity - Description for automatic irq affinity assignements
> * @pre_vectors: Don't apply affinity to @pre_vectors at beginning of
> @@ -266,13 +268,13 @@ struct irq_affinity_notify {
> * @post_vectors:Don't apply affinity
On Thu, 14 Feb 2019, 陈华才 wrote:
Please do not top post
> I'll test next week, but 4.19 has the same problem, how to fix that for 4.19?
By applying the very same patch perhaps?
Thanks,
tglx
On Thu, 14 Feb 2019, Ming Lei wrote:
> +static void nvme_calc_irq_sets(struct irq_affinity *affd, int nvecs)
> +{
> + struct nvme_dev *dev = affd->priv;
> +
> + nvme_calc_io_queues(dev, nvecs);
> +
> + affd->set_size[HCTX_TYPE_DEFAULT] = dev->io_queues[HCTX_TYPE_DEFAULT];
> + affd->
On Wed, 13 Feb 2019, Keith Busch wrote:
Cc+ Huacai Chen
> On Wed, Feb 13, 2019 at 10:41:55PM +0100, Thomas Gleixner wrote:
> > Btw, while I have your attention. There popped up an issue recently related
> > to that affinity logic.
> >
> > The curren
On Wed, 13 Feb 2019, Keith Busch wrote:
> On Wed, Feb 13, 2019 at 09:56:36PM +0100, Thomas Gleixner wrote:
> > On Wed, 13 Feb 2019, Bjorn Helgaas wrote:
> > > On Wed, Feb 13, 2019 at 06:50:37PM +0800, Ming Lei wrote:
> > > > We have to ask driver to re-calculate s
On Wed, 13 Feb 2019, Bjorn Helgaas wrote:
> On Wed, Feb 13, 2019 at 06:50:40PM +0800, Ming Lei wrote:
> > Currently pre-calculate each set's vectors, and this way requires same
> > 'max_vecs' and 'min_vecs' passed to pci_alloc_irq_vectors_affinity(),
> > then nvme_setup_irqs() has to retry in case of
On Wed, 13 Feb 2019, Bjorn Helgaas wrote:
> > Add a new callback of .calc_sets into 'struct irq_affinity' so that
> > driver can calculate set vectors after IRQ vector is allocated and
> > before spread IRQ vectors. Add 'priv' so that driver may retrieve
> > its private data via the 'struct irq_affi
On Wed, 13 Feb 2019, Bjorn Helgaas wrote:
> On Wed, Feb 13, 2019 at 06:50:37PM +0800, Ming Lei wrote:
> > Currently all parameters in 'affd' are read-only, so 'affd' is marked
> > as const in both pci_alloc_irq_vectors_affinity() and
> > irq_create_affinity_masks().
>
> s/all parameters in 'affd
On Tue, 12 Feb 2019, Ming Lei wrote:
> Hi,
>
> Currently pre-calculated set vectors are provided by the driver for
> allocating & spreading vectors. This way only works when a driver passes
> same 'max_vecs' and 'min_vecs' to pci_alloc_irq_vectors_affinity(),
> also requires driver to retry the allocating
On Tue, 12 Feb 2019, Ming Lei wrote:
> Currently pre-calculated set vectors are provided by the driver for
> allocating & spreading vectors. This way only works when a driver passes
> same 'max_vecs' and 'min_vecs' to pci_alloc_irq_vectors_affinity(),
> also requires driver to retry the allocating & spread
On Tue, 12 Feb 2019, Ming Lei wrote:
> Currently the array of irq set vectors is provided by driver.
>
> irq_create_affinity_masks() can be simplified a bit by treating the
> non-irq-set case as single irq set.
>
> So move this array into 'struct irq_affinity', and pre-define the max
> set number
Ming,
On Mon, 11 Feb 2019, Bjorn Helgaas wrote:
> On Mon, Feb 11, 2019 at 11:54:00AM +0800, Ming Lei wrote:
> > On Sun, Feb 10, 2019 at 05:30:41PM +0100, Thomas Gleixner wrote:
> > > On Fri, 25 Jan 2019, Ming Lei wrote:
> > >
> > > > This patch intr
On Fri, 25 Jan 2019, Ming Lei wrote:
> +static int nvme_setup_affinity(const struct irq_affinity *affd,
> +struct irq_affinity_desc *masks,
> +unsigned int nmasks)
> +{
> + struct nvme_dev *dev = affd->priv;
> + int affvecs = nmasks -
On Fri, 25 Jan 2019, Ming Lei wrote:
> Use the callback of .setup_affinity() to re-calculate the number
> of queues, and build irqs affinity with help of irq_build_affinity().
>
> Then nvme_setup_irqs() gets simplified a lot.
I'm pretty sure you can achieve the same by reworking the core code without
Ming,
On Fri, 25 Jan 2019, Ming Lei wrote:
> This patch introduces callback of .setup_affinity into 'struct
> irq_affinity', so that:
Please see Documentation/process/submitting-patches.rst. Search for 'This
patch'
>
> 1) allow drivers to customize the affinity for managed IRQ, for
> exam
On Sat, 9 Feb 2019, Jens Axboe wrote:
> +static void io_commit_cqring(struct io_ring_ctx *ctx)
> +{
> + struct io_cq_ring *ring = ctx->cq_ring;
> +
> + if (ctx->cached_cq_tail != READ_ONCE(ring->r.tail)) {
> + /* order cqe stores with ring update */
This lacks a reference to th
On Fri, 1 Feb 2019, Hannes Reinecke wrote:
> Thing is, if we have _managed_ CPU hotplug (ie if the hardware provides some
> means of quiescing the CPU before hotplug) then the whole thing is trivial;
> disable SQ and wait for all outstanding commands to complete.
> Then trivially all requests are c
Chen,
On Fri, 18 Jan 2019, Huacai Chen wrote:
> > > I did not say that you removed all NULL returns. I said that this function
> > > can return NULL for other reasons and then the same situation will happen.
> > >
> > > If the masks pointer returned is NULL then the calling code or any
> > > subse
r scanners which is just
contrary to the intent of SPDX identifiers to provide clear and
non-ambiguous license information. Aside from that, the value add of this
notice is below zero,
Fixes: 6a5ac9846508 ("block: Make struct request_queue smaller for CONFIG_BLK_DEV_ZONED=n")
S
Jens,
On Sun, 4 Nov 2018, Thomas Gleixner wrote:
> On Sun, 4 Nov 2018, Jens Axboe wrote:
> > On 11/4/18 5:02 AM, Thomas Gleixner wrote:
> > > So I assume, that I can pick up Mings series instead.
> >
> > Yes, let's do that.
> >
> > > There i
On Sun, 4 Nov 2018, Jens Axboe wrote:
Cc'ing Long with a hopefully working E-Mail address. The previous one
bounced because I stupidly copied the wrong one...
> On 11/4/18 5:02 AM, Thomas Gleixner wrote:
> > Jens,
> >
> > On Sat, 3 Nov 2018, Jens Axboe wrote:
> >
Jens,
On Sat, 3 Nov 2018, Jens Axboe wrote:
> On 11/2/18 8:59 AM, Ming Lei wrote:
> > Hi Jens,
> >
> > As I mentioned, there are at least two issues in the patch of '
> > irq: add support for allocating (and affinitizing) sets of IRQs':
> >
> > 1) it is wrong to pass 'mask + usedvec' to irq_bui
On Tue, 30 Oct 2018, Jens Axboe wrote:
> On 10/30/18 11:25 AM, Thomas Gleixner wrote:
> > Jens,
> >
> > On Tue, 30 Oct 2018, Jens Axboe wrote:
> >> On 10/30/18 10:02 AM, Keith Busch wrote:
> >>> pci_alloc_irq_vectors_affinity() starts at the provided max
Jens,
On Tue, 30 Oct 2018, Jens Axboe wrote:
> On 10/30/18 10:02 AM, Keith Busch wrote:
> > pci_alloc_irq_vectors_affinity() starts at the provided max_vecs. If
> > that doesn't work, it will iterate down to min_vecs without returning to
> > the caller. The caller doesn't have a chance to adjust i
affinitized correctly across the machine.
>
> Cc: Thomas Gleixner
> Cc: linux-ker...@vger.kernel.org
> Reviewed-by: Hannes Reinecke
> Signed-off-by: Jens Axboe
This looks good.
Vs. merge logistics: I'm expecting some other changes in that area as per
discussion with megasas (I
On Fri, 6 Apr 2018, Thomas Gleixner wrote:
> On Fri, 6 Apr 2018, Ming Lei wrote:
> >
> > I will post V4 soon by using cpu_present_mask in the 1st stage irq spread.
> > And it should work fine for Kashyap's case in normal cases.
>
> No need to resend. I've ch
On Fri, 6 Apr 2018, Ming Lei wrote:
>
> I will post V4 soon by using cpu_present_mask in the 1st stage irq spread.
> And it should work fine for Kashyap's case in normal cases.
No need to resend. I've changed it already and will push it out after
lunch.
Thanks,
tglx
On Wed, 4 Apr 2018, Ming Lei wrote:
> On Wed, Apr 04, 2018 at 02:45:18PM +0200, Thomas Gleixner wrote:
> > Now the 4 offline CPUs are plugged in again. These CPUs won't ever get an
> > interrupt as all interrupts stay on CPU 0-3 unless one of these CPUs is
> > unplugged.
On Wed, 4 Apr 2018, Ming Lei wrote:
> On Wed, Apr 04, 2018 at 10:25:16AM +0200, Thomas Gleixner wrote:
> > In the example above:
> >
> > > > > irq 39, cpu list 0,4
> > > > > irq 40, cpu list 1,6
> > > > > irq 41, cp
On Wed, 4 Apr 2018, Thomas Gleixner wrote:
> I'm aware how that hw-queue stuff works. But that only works if the
> spreading algorithm makes the interrupts affine to offline/not-present CPUs
> when the block device is initialized.
>
> In the example above:
>
> > >
On Wed, 4 Apr 2018, Ming Lei wrote:
> On Tue, Apr 03, 2018 at 03:32:21PM +0200, Thomas Gleixner wrote:
> > On Thu, 8 Mar 2018, Ming Lei wrote:
> > > 1) before 84676c1f21 ("genirq/affinity: assign vectors to all possible
> > > CPUs")
> > >
On Thu, 8 Mar 2018, Ming Lei wrote:
> 1) before 84676c1f21 ("genirq/affinity: assign vectors to all possible CPUs")
> irq 39, cpu list 0
> irq 40, cpu list 1
> irq 41, cpu list 2
> irq 42, cpu list 3
>
> 2) after 84676c1f21 ("genirq/affinity: assign vectors to all possible
Ming,
On Fri, 30 Mar 2018, Ming Lei wrote:
> On Fri, Mar 09, 2018 at 04:08:19PM +0100, Thomas Gleixner wrote:
> > Thoughts?
>
> Given this patchset doesn't have effect on normal machines without
> supporting physical CPU hotplug, it can fix performance regression on
On Fri, 9 Mar 2018, Ming Lei wrote:
> On Fri, Mar 09, 2018 at 11:08:54AM +0100, Thomas Gleixner wrote:
> > > > So my understanding is that these irq patches are enhancements and not
> > > > bug
> > > > fixes. I'll queue them for 4.17 then.
On Fri, 9 Mar 2018, Ming Lei wrote:
> On Fri, Mar 09, 2018 at 12:20:09AM +0100, Thomas Gleixner wrote:
> > On Thu, 8 Mar 2018, Ming Lei wrote:
> > > Actually, it isn't a real fix, the real one is in the following two:
> > >
> > > 0c20244d458e scsi: m
On Thu, 8 Mar 2018, Ming Lei wrote:
> Actually, it isn't a real fix, the real one is in the following two:
>
> 0c20244d458e scsi: megaraid_sas: fix selection of reply queue
> ed6d043be8cd scsi: hpsa: fix selection of reply queue
Where are these commits? Neither Linus' tree nor -next kn
On Tue, 16 Jan 2018, Ming Lei wrote:
> On Mon, Jan 15, 2018 at 09:40:36AM -0800, Christoph Hellwig wrote:
> > On Tue, Jan 16, 2018 at 12:03:43AM +0800, Ming Lei wrote:
> > > Hi,
> > >
> > > These two patches fixes IO hang issue reported by Laurence.
> > >
> > > 84676c1f21 ("genirq/affinity: assi
On Tue, 16 Jan 2018, Ming Lei wrote:
> These two patches fixes IO hang issue reported by Laurence.
>
> 84676c1f21 ("genirq/affinity: assign vectors to all possible CPUs")
> may cause one irq vector assigned to all offline CPUs, then this vector
> can't handle irq any more.
>
> The 1st patch moves
PUs.
>
> Reported-by: Christian Borntraeger
> Tested-by: Christian Borntraeger
> Tested-by: Stefan Haberland
> Cc: linux-ker...@vger.kernel.org
> Cc: Thomas Gleixner
FWIW, Acked-by: Thomas Gleixner
On Thu, 19 Oct 2017, Bart Van Assche wrote:
> On Wed, 2017-10-18 at 18:38 +0900, Byungchul Park wrote:
> > Sometimes, we want to initialize completions with separate lockdep maps
> > to assign lock classes under control. For example, the workqueue code
> > manages lockdep maps, as it can classify lo
Jens,
On Thu, 5 Oct 2017, Jens Axboe wrote:
> On 10/05/2017 01:23 PM, Thomas Gleixner wrote:
> > Come on. You know very well that a prerequisite for global changes which is
> > not yet used in Linus' tree can get merged post merge window in order to
> > avoid massive
On Thu, 5 Oct 2017, Jens Axboe wrote:
> On 10/05/2017 11:49 AM, Kees Cook wrote:
> > Yes, totally true. tglx and I ended up meeting face-to-face at the
> > Kernel Recipes conference and we solved some outstanding design issues
> > with the conversion. The timing meant the new API went into -rc3,
>
m the arch code.
Signed-off-by: Christoph Hellwig
Cc: Sagi Grimberg
Cc: Jens Axboe
Cc: Keith Busch
Cc: linux-block@vger.kernel.org
Cc: linux-n...@lists.infradead.org
Link: http://lkml.kernel.org/r/20170603140403.27379-5-...@lst.de
Signed-off-by: Thomas Gleixner
---
kernel/irq/affinity.c |
On Sat, 3 Jun 2017, Christoph Hellwig wrote:
> This will allow us to spread MSI/MSI-X affinity over all present CPUs and
> thus better deal with systems where cpus are taken on and offline all the
> time.
>
> Signed-off-by: Christoph Hellwig
> ---
> kernel/irq/manage.c | 6 +++---
> 1 file chang
On Sat, 3 Jun 2017, Christoph Hellwig wrote:
> +
> +bool irq_affinity_set(int irq, struct irq_desc *desc, const cpumask_t *mask)
> +{
> + struct irq_data *data = irq_desc_get_irq_data(desc);
> + struct irq_chip *chip = irq_data_get_irq_chip(data);
> + bool ret = false;
> +
> + if (!
On Fri, 16 Jun 2017, Thomas Gleixner wrote:
> On Sat, 3 Jun 2017, Christoph Hellwig wrote:
> > +bool irq_affinity_set(int irq, struct irq_desc *desc, const cpumask_t
> > *mask)
> > +{
> > + struct irq_data *data = irq_desc_get_irq_data(desc);
On Sat, 3 Jun 2017, Christoph Hellwig wrote:
> +bool irq_affinity_set(int irq, struct irq_desc *desc, const cpumask_t *mask)
> +{
> + struct irq_data *data = irq_desc_get_irq_data(desc);
> + struct irq_chip *chip = irq_data_get_irq_chip(data);
> + bool ret = false;
> +
> + if (!irq_
On Sat, 3 Jun 2017, Christoph Hellwig wrote:
> +static void irq_affinity_online_irq(unsigned int irq, struct irq_desc *desc,
> + unsigned int cpu)
> +{
> + const struct cpumask *affinity;
> + struct irq_data *data;
> + struct irq_chip *chip;
> + unsig
On Sat, 3 Jun 2017, Christoph Hellwig wrote:
> +static void irq_affinity_online_irq(unsigned int irq, struct irq_desc *desc,
> + unsigned int cpu)
> +{
> +
> + cpumask_and(mask, affinity, cpu_online_mask);
> + cpumask_set_cpu(cpu, mask);
> + if (irqd_has_
On Sat, 3 Jun 2017, Christoph Hellwig wrote:
> +
> +bool irq_affinity_set(int irq, struct irq_desc *desc, const cpumask_t *mask)
This should be named irq_affinity_force() because it circumvents the 'move
in irq context' mechanism. I'll do that myself. No need to resend.
Thanks,
tglx
On Fri, 16 Jun 2017, Christoph Hellwig wrote:
> can you take a look at the generic patches as they are the required
> base for the block work?
It's next on my ever-growing todo list
On Fri, 19 May 2017, Christoph Hellwig wrote:
> Factor out code from the x86 cpu hot plug code to program the affinity
> for a vector for a hot plug / hot unplug event.
> +bool irq_affinity_set(int irq, struct irq_desc *desc, const cpumask_t *mask)
> +{
> + struct irq_data *data = irq_desc_get
On Fri, 19 May 2017, Christoph Hellwig wrote:
> - /* Stabilize the cpumasks */
> - get_online_cpus();
How is that protected against physical CPU hotplug? Physical CPU hotplug
manipulates the present mask.
> - nodes = get_nodes_in_cpumask(cpu_online_mask, &nodemsk);
> + nodes = get
On Mon, 13 Feb 2017, Jens Axboe wrote:
> On 02/13/2017 07:14 AM, Thomas Gleixner wrote:
> > Gabriel reported the lockdep splat below while investigating something
> > different.
> >
> > Explanation for the splat is in the function comment above
> > del_timer_sy
Gabriel reported the lockdep splat below while investigating something
different.
Explanation for the splat is in the function comment above
del_timer_sync().
I can reproduce it as well and it's clearly broken.
Thanks,
tglx
---
[ 81.518032] ==
On Fri, 3 Feb 2017, Christoph Hellwig wrote:
> @@ -127,6 +127,7 @@ enum cpuhp_state {
> CPUHP_AP_ONLINE_IDLE,
> CPUHP_AP_SMPBOOT_THREADS,
> CPUHP_AP_X86_VDSO_VMA_ONLINE,
> + CPUHP_AP_IRQ_AFFINIY_ONLINE,
s/AFFINIY/AFFINITY/ perhaps?
> +static void __irq_affinity_set(unsigned
On Wed, 9 Nov 2016, Christoph Hellwig wrote:
> On Wed, Nov 09, 2016 at 08:51:35AM +0100, Thomas Gleixner wrote:
> > It's available from
> >
> > git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git irq/for-block
> >
> > for you to pull into the block t
On Tue, 8 Nov 2016, Jens Axboe wrote:
> On 11/08/2016 06:15 PM, Christoph Hellwig wrote:
> > This series adds support for automatic interrupt assignment to devices
> > that have a few vectors that are set aside for admin or config purposes
> > and thus should not fall into the general per-cpu assg
On Tue, 8 Nov 2016, Christoph Hellwig wrote:
> On Tue, Nov 08, 2016 at 03:59:16PM +0100, Hannes Reinecke wrote:
> >
> > Which you don't in this patch:
>
> True. We will always in the end, but the split isn't right, we'll
> need to pass the non-NULL argument starting in this patch.
No, in the pr
On Tue, 8 Nov 2016, Hannes Reinecke wrote:
> Add a reverse-mapping function to return the interrupt vector for
> any CPU if interrupt affinity is enabled.
>
> Signed-off-by: Hannes Reinecke
> ---
> drivers/pci/msi.c | 36
> include/linux/pci.h | 1 +
> 2
On Tue, 8 Nov 2016, Hannes Reinecke wrote:
> Shouldn't you check for NULL affd here?
No. The introduction of the default affinity struct should happen in that
patch and it should be handed down instead of NULL. Ditto for the next
patch.
Thanks,
tglx
--
To unsubscribe from this list: send
On Sun, 6 Nov 2016, Christoph Hellwig wrote:
> drivers/pci/msi.c | 61
> include/linux/interrupt.h | 26 ---
> include/linux/pci.h | 14 ++
> kernel/irq/affinity.c | 65
> ---
Alexander,
On Wed, 21 Sep 2016, Alexander Gordeev wrote:
> On Wed, Sep 14, 2016 at 04:18:48PM +0200, Christoph Hellwig wrote:
> > +/**
> > + * irq_calc_affinity_vectors - Calculate to optimal number of vectors for
> > a given affinity mask
> > + * @affinity: The affinity mask to spre
Replace the block-mq notifier list management with the multi instance
facility in the cpu hotplug state machine.
Signed-off-by: Thomas Gleixner
Cc: Jens Axboe
Cc: Peter Zijlstra
Cc: linux-block@vger.kernel.org
Cc: r...@linutronix.de
Cc: Christoph Hellwig
---
block/Makefile |2
From: Sebastian Andrzej Siewior
Install the callbacks via the state machine so we can phase out the cpu
hotplug notifiers mess.
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Thomas Gleixner
Cc: Jens Axboe
Cc: Peter Zijlstra
Cc: linux-block@vger.kernel.org
Cc: r...@linutronix.de
The following series converts block/mq to the new hotplug state
machine. The series is against block.git/for-next and depends on
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git smp/for-block
This branch contains the necessary infrastructure for multi-instance
callbacks which allows u
On Tue, 20 Sep 2016, Alexander Gordeev wrote:
> On Mon, Sep 19, 2016 at 03:50:07PM +0200, Christoph Hellwig wrote:
> > On Mon, Sep 19, 2016 at 09:30:58AM +0200, Alexander Gordeev wrote:
> > > > INIT_LIST_HEAD(&desc->list);
> > > > desc->dev = dev;
> > > > + desc->nvec_used = n
On Tue, 20 Sep 2016, Christoph Hellwig wrote:
> On Mon, Sep 19, 2016 at 09:28:20PM -0000, Thomas Gleixner wrote:
> > From: Sebastian Andrzej Siewior
> >
> > Install the callbacks via the state machine so we can phase out the cpu
> > hotplug notifiers..
>
>
From: Sebastian Andrzej Siewior
Install the callbacks via the state machine so we can phase out the cpu
hotplug notifiers mess.
Signed-off-by: Sebastian Andrzej Siewior
Cc: Peter Zijlstra
Cc: Jens Axboe
Cc: r...@linutronix.de
Signed-off-by: Thomas Gleixner
---
block/blk-mq.c | 87
This patch only reserves two CPU hotplug states for block/mq so the block tree
can apply the conversion patches.
Signed-off-by: Sebastian Andrzej Siewior
Cc: Peter Zijlstra
Cc: Jens Axboe
Cc: r...@linutronix.de
Signed-off-by: Thomas Gleixner
---
include/linux/cpuhotplug.h |2 ++
1 file
The following series converts block/mq to the new hotplug state
machine. Patch 1/3 reserves the states for the block layer and is already
applied to
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git smp/for-block
to avoid merge conflicts. This branch can be pulled into the block layer
From: Sebastian Andrzej Siewior
Install the callbacks via the state machine so we can phase out the cpu
hotplug notifiers..
Signed-off-by: Sebastian Andrzej Siewior
Cc: Peter Zijlstra
Cc: Jens Axboe
Cc: r...@linutronix.de
Signed-off-by: Thomas Gleixner
---
block/blk-mq-cpu.c | 15