On Fri, 6 Apr 2018, Ming Lei wrote:
>
> I will post V4 soon by using cpu_present_mask in the 1st stage irq spread.
> And it should work fine for Kashyap's case in normal cases.
No need to resend. I've changed it already and will push it out after
lunch.
Thanks,
tglx
On Wed, 4 Apr 2018, Ming Lei wrote:
> On Wed, Apr 04, 2018 at 02:45:18PM +0200, Thomas Gleixner wrote:
> > Now the 4 offline CPUs are plugged in again. These CPUs won't ever get an
> > interrupt as all interrupts stay on CPU 0-3 unless one of these CPUs is
> > unplugged.
On Wed, 4 Apr 2018, Ming Lei wrote:
> On Wed, Apr 04, 2018 at 10:25:16AM +0200, Thomas Gleixner wrote:
> > In the example above:
> >
> > > > > irq 39, cpu list 0,4
> > > > > irq 40, cpu list 1,6
> > > > >
On Wed, 4 Apr 2018, Thomas Gleixner wrote:
> I'm aware how that hw-queue stuff works. But that only works if the
> spreading algorithm makes the interrupts affine to offline/not-present CPUs
> when the block device is initialized.
>
> In the example above:
>
> > > >
On Wed, 4 Apr 2018, Ming Lei wrote:
> On Tue, Apr 03, 2018 at 03:32:21PM +0200, Thomas Gleixner wrote:
> > On Thu, 8 Mar 2018, Ming Lei wrote:
> > > 1) before 84676c1f21 ("genirq/affinity: assign vectors to all possible
> > > CPUs")
> > >
On Thu, 8 Mar 2018, Ming Lei wrote:
> 1) before 84676c1f21 ("genirq/affinity: assign vectors to all possible CPUs")
> irq 39, cpu list 0
> irq 40, cpu list 1
> irq 41, cpu list 2
> irq 42, cpu list 3
>
> 2) after 84676c1f21 ("genirq/affinity: assign vectors to all possible
Ming,
On Fri, 30 Mar 2018, Ming Lei wrote:
> On Fri, Mar 09, 2018 at 04:08:19PM +0100, Thomas Gleixner wrote:
> > Thoughts?
>
> Given this patchset has no effect on normal machines that don't
> support physical CPU hotplug, it can fix the performance regression on
>
On Fri, 9 Mar 2018, Ming Lei wrote:
> On Fri, Mar 09, 2018 at 11:08:54AM +0100, Thomas Gleixner wrote:
> > > > So my understanding is that these irq patches are enhancements and not
> > > > bug
> > > > fixes. I'll queue them for 4.17 then.
On Fri, 9 Mar 2018, Ming Lei wrote:
> On Fri, Mar 09, 2018 at 12:20:09AM +0100, Thomas Gleixner wrote:
> > On Thu, 8 Mar 2018, Ming Lei wrote:
> > > Actually, it isn't a real fix, the real one is in the following two:
> > >
> > > 0c20244d458e scsi: megara
On Thu, 8 Mar 2018, Ming Lei wrote:
> Actually, it isn't a real fix, the real one is in the following two:
>
> 0c20244d458e scsi: megaraid_sas: fix selection of reply queue
> ed6d043be8cd scsi: hpsa: fix selection of reply queue
Where are these commits? Neither in the Linus tree nor in -next.
On Tue, 16 Jan 2018, Ming Lei wrote:
> On Mon, Jan 15, 2018 at 09:40:36AM -0800, Christoph Hellwig wrote:
> > On Tue, Jan 16, 2018 at 12:03:43AM +0800, Ming Lei wrote:
> > > Hi,
> > >
> > > These two patches fix the IO hang issue reported by Laurence.
> > >
> > > 84676c1f21 ("genirq/affinity:
On Tue, 16 Jan 2018, Ming Lei wrote:
> These two patches fix the IO hang issue reported by Laurence.
>
> 84676c1f21 ("genirq/affinity: assign vectors to all possible CPUs")
> may cause one irq vector assigned to all offline CPUs, then this vector
> can't handle irq any more.
>
> The 1st patch
t not present CPUs.
>
> Reported-by: Christian Borntraeger <borntrae...@de.ibm.com>
> Tested-by: Christian Borntraeger <borntrae...@de.ibm.com>
> Tested-by: Stefan Haberland <s...@linux.vnet.ibm.com>
> Cc: linux-ker...@vger.kernel.org
> Cc: Thomas Gleixner <t...@linutronix.de>
FWIW, Acked-by: Thomas Gleixner <t...@linutronix.de>
On Thu, 19 Oct 2017, Bart Van Assche wrote:
> On Wed, 2017-10-18 at 18:38 +0900, Byungchul Park wrote:
> > Sometimes, we want to initialize completions with separate lockdep maps
> > to assign lock classes under control. For example, the workqueue code
> > manages lockdep maps, as it can classify
Jens,
On Thu, 5 Oct 2017, Jens Axboe wrote:
> On 10/05/2017 01:23 PM, Thomas Gleixner wrote:
> > Come on. You know very well that a prerequisite for global changes which is
> > not yet used in Linus tree can get merged post merge window in order to
> > avoid massive
On Thu, 5 Oct 2017, Jens Axboe wrote:
> On 10/05/2017 11:49 AM, Kees Cook wrote:
> > Yes, totally true. tglx and I ended up meeting face-to-face at the
> > Kernel Recipes conference and we solved some outstanding design issues
> > with the conversion. The timing meant the new API went into -rc3,
>
l.org/r/20170603140403.27379-5-...@lst.de
Signed-off-by: Thomas Gleixner <t...@linutronix.de>
---
kernel/irq/affinity.c | 76 +-
1 file changed, 63 insertions(+), 13 deletions(-)
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@
On Sat, 3 Jun 2017, Christoph Hellwig wrote:
> This will allow us to spread MSI/MSI-X affinity over all present CPUs and
> thus better deal with systems where cpus are taken on- and offline all the
> time.
>
> Signed-off-by: Christoph Hellwig
> ---
> kernel/irq/manage.c | 6 +++---
>
On Sat, 3 Jun 2017, Christoph Hellwig wrote:
> +
> +bool irq_affinity_set(int irq, struct irq_desc *desc, const cpumask_t *mask)
> +{
> + struct irq_data *data = irq_desc_get_irq_data(desc);
> + struct irq_chip *chip = irq_data_get_irq_chip(data);
> + bool ret = false;
> +
> + if
On Fri, 16 Jun 2017, Thomas Gleixner wrote:
> On Sat, 3 Jun 2017, Christoph Hellwig wrote:
> > +bool irq_affinity_set(int irq, struct irq_desc *desc, const cpumask_t
> > *mask)
> > +{
> > + struct irq_data *data = irq_desc_get_irq_data(desc);
&g
On Sat, 3 Jun 2017, Christoph Hellwig wrote:
> +static void irq_affinity_online_irq(unsigned int irq, struct irq_desc *desc,
> + unsigned int cpu)
> +{
> + const struct cpumask *affinity;
> + struct irq_data *data;
> + struct irq_chip *chip;
> +
On Sat, 3 Jun 2017, Christoph Hellwig wrote:
> +static void irq_affinity_online_irq(unsigned int irq, struct irq_desc *desc,
> + unsigned int cpu)
> +{
> +
> + cpumask_and(mask, affinity, cpu_online_mask);
> + cpumask_set_cpu(cpu, mask);
> + if
On Sat, 3 Jun 2017, Christoph Hellwig wrote:
> +
> +bool irq_affinity_set(int irq, struct irq_desc *desc, const cpumask_t *mask)
This should be named irq_affinity_force() because it circumvents the 'move
in irq context' mechanism. I'll do that myself. No need to resend.
Thanks,
tglx
On Fri, 16 Jun 2017, Christoph Hellwig wrote:
> can you take a look at the generic patches as they are the required
> base for the block work?
It's next on my ever-growing todo list.
On Fri, 19 May 2017, Christoph Hellwig wrote:
> Factor out code from the x86 cpu hot plug code to program the affinity
> for a vector for a hot plug / hot unplug event.
> +bool irq_affinity_set(int irq, struct irq_desc *desc, const cpumask_t *mask)
> +{
> + struct irq_data *data =
On Fri, 19 May 2017, Christoph Hellwig wrote:
> - /* Stabilize the cpumasks */
> - get_online_cpus();
How is that protected against physical CPU hotplug? Physical CPU hotplug
manipulates the present mask.
> - nodes = get_nodes_in_cpumask(cpu_online_mask, );
> + nodes =
Gabriel reported the lockdep splat below while investigating something
different.
Explanation for the splat is in the function comment above
del_timer_sync().
I can reproduce it as well and it's clearly broken.
Thanks,
tglx
---
[ 81.518032]
On Fri, 3 Feb 2017, Christoph Hellwig wrote:
> @@ -127,6 +127,7 @@ enum cpuhp_state {
> CPUHP_AP_ONLINE_IDLE,
> CPUHP_AP_SMPBOOT_THREADS,
> CPUHP_AP_X86_VDSO_VMA_ONLINE,
> + CPUHP_AP_IRQ_AFFINIY_ONLINE,
s/AFFINIY/AFFINITY/ perhaps?
> +static void __irq_affinity_set(unsigned
On Tue, 8 Nov 2016, Hannes Reinecke wrote:
> Add a reverse-mapping function to return the interrupt vector for
> any CPU if interrupt affinity is enabled.
>
> Signed-off-by: Hannes Reinecke
> ---
> drivers/pci/msi.c | 36
>
On Sun, 6 Nov 2016, Christoph Hellwig wrote:
> drivers/pci/msi.c | 61
> include/linux/interrupt.h | 26 ---
> include/linux/pci.h | 14 ++
> kernel/irq/affinity.c | 65
>
Alexander,
On Wed, 21 Sep 2016, Alexander Gordeev wrote:
> On Wed, Sep 14, 2016 at 04:18:48PM +0200, Christoph Hellwig wrote:
> > +/**
> > + * irq_calc_affinity_vectors - Calculate the optimal number of vectors for
> > a given affinity mask
> > + * @affinity: The affinity mask to
Replace the block-mq notifier list management with the multi instance
facility in the cpu hotplug state machine.
Signed-off-by: Thomas Gleixner <t...@linutronix.de>
Cc: Jens Axboe <ax...@kernel.dk>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: linux-block@vger.kernel.org
Cc: r..
From: Sebastian Andrzej Siewior <bige...@linutronix.de>
Install the callbacks via the state machine so we can phase out the cpu
hotplug notifiers mess.
Signed-off-by: Sebastian Andrzej Siewior <bige...@linutronix.de>
Signed-off-by: Thomas Gleixner <t...@linutronix.de>
On Tue, 20 Sep 2016, Christoph Hellwig wrote:
> On Mon, Sep 19, 2016 at 09:28:20PM -0000, Thomas Gleixner wrote:
> > From: Sebastian Andrzej Siewior <bige...@linutronix.de>
> >
> > Install the callbacks via the state machine so we can phase out the cpu
> >
<ax...@kernel.dk>
Cc: r...@linutronix.de
Signed-off-by: Thomas Gleixner <t...@linutronix.de>
---
block/blk-mq.c | 87 -
1 file changed, 43 insertions(+), 44 deletions(-)
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2
The following series converts block/mq to the new hotplug state
machine. Patch 1/3 reserves the states for the block layer and is already
applied to
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git smp/for-block
to avoid merge conflicts. This branch can be pulled into the block
x...@kernel.dk>
Cc: r...@linutronix.de
Signed-off-by: Thomas Gleixner <t...@linutronix.de>
---
block/blk-mq-cpu.c     | 15 +++
block/blk-mq.c         | 21 +
block/blk-mq.h         |  2 +-
include/linux/blk-mq.h |  2 +-
4 files changed,