Re: [PATCH V3 0/4] genirq/affinity: irq vector spread among online CPUs as far as possible

2018-03-27 Thread Artem Bityutskiy
On Mon, 2018-03-26 at 10:39 +0200, Thorsten Leemhuis wrote: > Lo! Your friendly Linux regression tracker here ;-) > > On 08.03.2018 14:18, Artem Bityutskiy wrote: > > On Thu, 2018-03-08 at 18:53 +0800, Ming Lei wrote: > > > This patchset tries to spread among online CPUs as far as possible, so > >

Re: [PATCH 1/3] blk-mq: Allow PCI vector offset for mapping queues

2018-03-27 Thread Jens Axboe
On 3/27/18 9:39 AM, Keith Busch wrote: > The PCI interrupt vectors intended to be associated with a queue may > not start at 0; a driver may allocate pre_vectors for special use. This > patch adds an offset parameter so blk-mq may find the intended affinity > mask and updates all drivers using this

Re: [PATCH] blk-mq: only run mapped hw queues in blk_mq_run_hw_queues()

2018-03-27 Thread Jens Axboe
On 3/27/18 7:20 PM, Ming Lei wrote: > From commit 20e4d813931961fe ("blk-mq: simplify queue mapping & schedule > with each possible CPU") on, it should be easier to see an unmapped hctx > in some CPU topologies, i.e. a hctx may not be mapped to any CPU. > > This patch avoids the warning in __blk_mq_dela

Re: [PATCH 3/3] nvme-pci: Separate IO and admin queue IRQ vectors

2018-03-27 Thread Ming Lei
On Tue, Mar 27, 2018 at 09:39:08AM -0600, Keith Busch wrote: > The admin and first IO queues shared the first irq vector, which has an > affinity mask including cpu0. If a system allows cpu0 to be offlined, > the admin queue may not be usable if no other CPUs in the affinity mask > are online. This

Re: [PATCH 2/3] nvme-pci: Remove unused queue parameter

2018-03-27 Thread Ming Lei
On Tue, Mar 27, 2018 at 09:39:07AM -0600, Keith Busch wrote: > All the queue memory is allocated up front. We don't take the node > into consideration when creating queues anymore, so remove the unused > parameter. > > Signed-off-by: Keith Busch > Reviewed-by: Christoph Hellwig > --- > v1 -> v

Re: [PATCH 1/3] blk-mq: Allow PCI vector offset for mapping queues

2018-03-27 Thread Ming Lei
On Tue, Mar 27, 2018 at 09:39:06AM -0600, Keith Busch wrote: > The PCI interrupt vectors intended to be associated with a queue may > not start at 0; a driver may allocate pre_vectors for special use. This > patch adds an offset parameter so blk-mq may find the intended affinity > mask and updates

[PATCH] blk-mq: only run mapped hw queues in blk_mq_run_hw_queues()

2018-03-27 Thread Ming Lei
From commit 20e4d813931961fe ("blk-mq: simplify queue mapping & schedule with each possible CPU") on, it should be easier to see an unmapped hctx in some CPU topologies, i.e. a hctx may not be mapped to any CPU. This patch avoids the warning in __blk_mq_delay_run_hw_queue() by checking if the hctx is m
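For readers skimming the archive, a minimal sketch of the idea (not the verbatim patch; the real diff may place the check differently): blk_mq_run_hw_queues() simply skips any hardware queue that has no software queues mapped to it, so __blk_mq_delay_run_hw_queue() is never entered for an unmapped hctx and its warning cannot fire.

void blk_mq_run_hw_queues(struct request_queue *q, bool async)
{
	struct blk_mq_hw_ctx *hctx;
	int i;

	queue_for_each_hw_ctx(q, hctx, i) {
		if (blk_mq_hctx_stopped(hctx))
			continue;
		/* Sketch of the new check: nothing to run on an unmapped hctx. */
		if (!blk_mq_hw_queue_mapped(hctx))
			continue;
		blk_mq_run_hw_queue(hctx, async);
	}
}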

Re: [PATCH 0/2] loop: don't hang on lo_ctl_mutex in ioctls

2018-03-27 Thread Jens Axboe
On 3/26/18 10:39 PM, Omar Sandoval wrote: > From: Omar Sandoval > > Hi, Jens, > > We hit an issue where a loop device on NFS (yes, I know) got stuck and a > bunch of losetup processes got stuck in uninterruptible sleep waiting > for lo_ctl_mutex as a result. Calling into the filesystem while hol

Re: [PATCH v3 01/11] PCI/P2PDMA: Support peer-to-peer memory

2018-03-27 Thread Logan Gunthorpe
On 27/03/18 02:47 AM, Jonathan Cameron wrote: > I'll see if I can get our PCI SIG people to follow this through and see if > it is just an omission or as Bjorn suggested, there is some reason we > aren't thinking of that makes it hard. That would be great! Thanks! Logan

[PATCH 1/3] blk-mq: Allow PCI vector offset for mapping queues

2018-03-27 Thread Keith Busch
The PCI interrupt vectors intended to be associated with a queue may not start at 0; a driver may allocate pre_vectors for special use. This patch adds an offset parameter so blk-mq may find the intended affinity mask and updates all drivers using this API accordingly. Cc: Don Brace Cc: Cc: Sig
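For context, a hedged illustration of how a driver would use the new offset argument (the driver names below are hypothetical, not taken from the patch): a driver that reserved one pre_vector for non-queue interrupts passes 1 so that hardware queue 0 is matched against the affinity mask of PCI vector 1 rather than vector 0.

static int foo_map_queues(struct blk_mq_tag_set *set)
{
	struct foo_dev *foo = set->driver_data;	/* hypothetical driver data */

	/* Queue 0 now pairs with vector 0 + offset, i.e. the first IO vector. */
	return blk_mq_pci_map_queues(set, foo->pdev, 1 /* pre_vectors */);
}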

Re: disk-io lockup in 4.14.13 kernel

2018-03-27 Thread Bart Van Assche
On 03/27/18 01:59, Jaco Kroon wrote: I triggered it hoping to get a stack trace of the process which is deadlocking, to find where the lock is being taken that ends up blocking, but I then realized that you mentioned sleeping, which may end up not having a stack trace because there is no process a

[PATCH 2/3] nvme-pci: Remove unused queue parameter

2018-03-27 Thread Keith Busch
All the queue memory is allocated up front. We don't take the node into consideration when creating queues anymore, so remove the unused parameter. Signed-off-by: Keith Busch Reviewed-by: Christoph Hellwig --- v1 -> v2: Added review. drivers/nvme/host/pci.c | 10 +++--- 1 file change
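A hedged sketch of the shape of this cleanup (the truncated diffstat above does not show the function touched, so treat the hunk below as illustrative only, not the real patch): the NUMA node argument is dropped because queue memory is allocated up front and the node is no longer consulted here.

-static struct nvme_queue *nvme_alloc_queue(struct nvme_dev *dev, int qid,
-		int depth, int node)
+static struct nvme_queue *nvme_alloc_queue(struct nvme_dev *dev, int qid,
+		int depth)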

[PATCH 3/3] nvme-pci: Separate IO and admin queue IRQ vectors

2018-03-27 Thread Keith Busch
The admin and first IO queues shared the first irq vector, which has an affinity mask including cpu0. If a system allows cpu0 to be offlined, the admin queue may not be usable if no other CPUs in the affinity mask are online. This is a problem since unlike IO queues, there is only one admin queue t

Re: 4.16-RC7 WARNING: CPU: 2 PID: 0 at block/blk-mq.c:1400 __blk_mq_delay_run_hw_queue

2018-03-27 Thread Christian Borntraeger
On 03/27/2018 02:01 PM, Ming Lei wrote: > Hi Stefan, > > On Tue, Mar 27, 2018 at 12:04:20PM +0200, Stefan Haberland wrote: >> Hi, >> >> I get the following warning in __blk_mq_delay_run_hw_queue when the >> scheduler is set to mq-deadline for DASD devices on s390. >> >> What I see is that for wh

Re: [PATCH 3/3] nvme-pci: Separate IO and admin queue IRQ vectors

2018-03-27 Thread Christoph Hellwig
> +static inline unsigned int nvme_ioq_vector(struct nvme_dev *dev, > + unsigned int qid) No need for the inline here I think. > +{ > + /* > + * A queue's vector matches the queue identifier unless the controller > + * has only one vector available. > + */ > + r
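To make the review comment easier to follow, here is a hedged reconstruction of the helper being discussed (the archive truncates the return statement, so the body below is inferred from the quoted comment rather than copied from the patch), with the inline dropped as suggested; the num_vecs field is assumed to be introduced elsewhere in the series.

static unsigned int nvme_ioq_vector(struct nvme_dev *dev, unsigned int qid)
{
	/*
	 * A queue's vector matches the queue identifier unless the
	 * controller has only one vector available.
	 */
	return dev->num_vecs == 1 ? 0 : qid;
}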

Re: [PATCH 2/3] nvme-pci: Remove unused queue parameter

2018-03-27 Thread Christoph Hellwig
Looks good, Reviewed-by: Christoph Hellwig

Re: [PATCH 1/3] blk-mq: Allow PCI vector offset for mapping queues

2018-03-27 Thread Christoph Hellwig
On Fri, Mar 23, 2018 at 04:19:21PM -0600, Keith Busch wrote: > The PCI interrupt vectors intended to be associated with a queue may > not start at 0. This patch adds an offset parameter so blk-mq may find > the intended affinity mask. The default value is 0 so existing drivers > that don't care abo

Re: 4.16-RC7 WARNING: CPU: 2 PID: 0 at block/blk-mq.c:1400 __blk_mq_delay_run_hw_queue

2018-03-27 Thread Stefan Haberland
This warning is harmless; please try the following patch: -- From 7b2b5139bfef80f44d1b1424e09ab35b715fbfdb Mon Sep 17 00:00:00 2001 From: Ming Lei Date: Tue, 27 Mar 2018 19:54:23 +0800 Subject: [PATCH] blk-mq: only run mapped hw queues in blk_mq_run_hw_queues() From commit 20e4d813931961fe

Re: 4.16-RC7 WARNING: CPU: 2 PID: 0 at block/blk-mq.c:1400 __blk_mq_delay_run_hw_queue

2018-03-27 Thread Ming Lei
Hi Stefan, On Tue, Mar 27, 2018 at 12:04:20PM +0200, Stefan Haberland wrote: > Hi, > > I get the following warning in __blk_mq_delay_run_hw_queue when the > scheduler is set to mq-deadline for DASD devices on s390. > > What I see is that for whatever reason there is a hctx nr 0 which has no > hc

4.16-RC7 WARNING: CPU: 2 PID: 0 at block/blk-mq.c:1400 __blk_mq_delay_run_hw_queue

2018-03-27 Thread Stefan Haberland
Hi, I get the following warning in __blk_mq_delay_run_hw_queue when the scheduler is set to mq-deadline for DASD devices on s390. What I see is that for whatever reason there is a hctx nr 0 which has no hctx->tags pointer set. From my observation it is always hctx nr 0 which has a tags NULL

Re: disk-io lockup in 4.14.13 kernel

2018-03-27 Thread Jaco Kroon
Hi Bart, > The above call trace means that SysRq-l was triggered, either via the keyboard > or through procfs. I don't think that there is any information in the above > that reveals the root cause of why a reboot was necessary. I triggered it hoping to get a stack trace of the process which is dea

Re: [PATCH v3 01/11] PCI/P2PDMA: Support peer-to-peer memory

2018-03-27 Thread Jonathan Cameron
On Mon, 26 Mar 2018 09:46:24 -0600 Logan Gunthorpe wrote: > On 26/03/18 08:01 AM, Bjorn Helgaas wrote: > > On Mon, Mar 26, 2018 at 12:11:38PM +0100, Jonathan Cameron wrote: > >> On Tue, 13 Mar 2018 10:43:55 -0600 > >> Logan Gunthorpe wrote: > >>> It turns out that root ports that support P2P

[PATCH 2/3] iomap: iomap_dio_rw() handles all sync writes

2018-03-27 Thread Dave Chinner
From: Dave Chinner Currently iomap_dio_rw() only handles (data)sync write completions for AIO. This means we can't optimise non-AIO IO to minimise device flushes as we can't tell the caller whether a flush is required or not. To solve this problem and enable further optimisations, make iomap_di
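A minimal sketch of what handling all sync writes means in practice (the flag name and the simplified completion below are assumptions about how the series is structured, not quotes from it): the direct IO records at submission time that the write needs (data)sync semantics, and the completion path issues the sync for AIO and non-AIO alike, so callers no longer call generic_write_sync() themselves.

/* Sketch only: the real struct iomap_dio layout and error handling differ. */
static ssize_t iomap_dio_complete(struct iomap_dio *dio)
{
	ssize_t ret = dio->error ? dio->error : dio->size;

	if (ret > 0 && (dio->flags & IOMAP_DIO_NEED_SYNC))
		ret = generic_write_sync(dio->iocb, ret);

	return ret;
}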

[PATCH 1/3] xfs: move generic_write_sync calls inwards

2018-03-27 Thread Dave Chinner
From: Dave Chinner To prepare for iomap infrastructure based DSYNC optimisations. While moving the code around, move the XFS write bytes metric update for direct IO into the xfs_dio_write_end_io callback so that we always capture the amount of data written via AIO+DIO. This fixes the problem where

[PATCH 3/3] iomap: Use FUA for pure data O_DSYNC DIO writes

2018-03-27 Thread Dave Chinner
From: Dave Chinner If we are doing direct IO writes with datasync semantics, we often have to flush metadata changes along with the data write. However, if we are overwriting existing data, there are no metadata changes that we need to flush. In this case, optimising the IO by using FUA write mak
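For context, a hedged sketch of the decision this patch makes (the helper name is hypothetical and the exact conditions in the series may differ): a pure data overwrite, with no shared extents and no dirty metadata, can be submitted as a REQ_FUA write on hardware that supports FUA, and the post-I/O cache flush is then skipped.

/* Hypothetical helper; IOMAP_F_DIRTY is assumed to come from this series. */
static bool iomap_dio_can_use_fua(const struct iomap *iomap,
		struct block_device *bdev)
{
	/* Shared extents or dirty metadata still need the full flush path. */
	if (iomap->flags & (IOMAP_F_SHARED | IOMAP_F_DIRTY))
		return false;

	/* Only worthwhile when the device honours FUA natively. */
	return blk_queue_fua(bdev_get_queue(bdev));
}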

[PATCH 0/3 V2] iomap: Use FUA for O_DSYNC DIO writes

2018-03-27 Thread Dave Chinner
Hi folks, This is a follow-up to my original patch to enable use of FUA writes for pure data O_DSYNC writes through the XFS and iomap based direct IO paths. This version has all of the changes Christoph asked for, and splits it up into simpler patches. The performance improvements are detailed in t