On Mon, 2018-03-26 at 10:39 +0200, Thorsten Leemhuis wrote:
> Lo! Your friendly Linux regression tracker here ;-)
>
> On 08.03.2018 14:18, Artem Bityutskiy wrote:
> > On Thu, 2018-03-08 at 18:53 +0800, Ming Lei wrote:
> > > This patchset tries to spread irq vectors among online CPUs as far as possible, so
> >
On 3/27/18 9:39 AM, Keith Busch wrote:
> The PCI interrupt vectors intended to be associated with a queue may
> not start at 0; a driver may allocate pre_vectors for special use. This
> patch adds an offset parameter so blk-mq may find the intended affinity
> mask and updates all drivers using this API accordingly.
On 3/27/18 7:20 PM, Ming Lei wrote:
> From commit 20e4d813931961fe ("blk-mq: simplify queue mapping & schedule
> with each possisble CPU") on, it is easier to end up with an unmapped hctx
> in some CPU topologies; that is, a hctx may not be mapped to any CPU.
>
> This patch avoids the warning in __blk_mq_delay_run_hw_queue() by
> checking whether the hctx is mapped before running it.
On Tue, Mar 27, 2018 at 09:39:08AM -0600, Keith Busch wrote:
> The admin and first IO queues shared the first irq vector, which has an
> affinity mask including cpu0. If a system allows cpu0 to be offlined,
> the admin queue may not be usable if no other CPUs in the affinity mask
> are online. This is a problem since unlike IO queues, there is only
> one admin queue that always needs to be usable.
On Tue, Mar 27, 2018 at 09:39:07AM -0600, Keith Busch wrote:
> All the queue memory is allocated up front. We don't take the node
> into consideration when creating queues anymore, so removing the unused
> parameter.
>
> Signed-off-by: Keith Busch
> Reviewed-by: Christoph Hellwig
> ---
> v1 -> v2:
> Added review.
On Tue, Mar 27, 2018 at 09:39:06AM -0600, Keith Busch wrote:
> The PCI interrupt vectors intended to be associated with a queue may
> not start at 0; a driver may allocate pre_vectors for special use. This
> patch adds an offset parameter so blk-mq may find the intended affinity
> mask and updates all drivers using this API accordingly.
From commit 20e4d813931961fe ("blk-mq: simplify queue mapping & schedule
with each possisble CPU") on, it is easier to end up with an unmapped hctx
in some CPU topologies; that is, a hctx may not be mapped to any CPU.
This patch avoids the warning in __blk_mq_delay_run_hw_queue() by
checking whether the hctx is mapped before running it.
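The shape of the fix is to skip unmapped hctxs when running all hw queues. A minimal sketch of the idea (not the actual diff; blk_mq_hw_queue_mapped() and blk_mq_hctx_has_pending() are existing blk-mq helpers):

void blk_mq_run_hw_queues(struct request_queue *q, bool async)
{
	struct blk_mq_hw_ctx *hctx;
	int i;

	queue_for_each_hw_ctx(q, hctx, i) {
		/* Never run a hctx that no CPU maps to. */
		if (!blk_mq_hw_queue_mapped(hctx))
			continue;
		if (blk_mq_hctx_has_pending(hctx))
			blk_mq_run_hw_queue(hctx, async);
	}
}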
On 3/26/18 10:39 PM, Omar Sandoval wrote:
> From: Omar Sandoval
>
> Hi, Jens,
>
> We hit an issue where a loop device on NFS (yes, I know) got stuck and a
> bunch of losetup processes got stuck in uninterruptible sleep waiting
> for lo_ctl_mutex as a result. Calling into the filesystem while holding
> lo_ctl_mutex turns out to be the underlying problem.
On 27/03/18 02:47 AM, Jonathan Cameron wrote:
> I'll see if I can get our PCI SIG people to follow this through and see if
> it is just an omission or as Bjorn suggested, there is some reason we
> aren't thinking of that makes it hard.
That would be great! Thanks!
Logan
The PCI interrupt vectors intended to be associated with a queue may
not start at 0; a driver may allocate pre_vectors for special use. This
patch adds an offset parameter so blk-mq may find the intended affinity
mask and updates all drivers using this API accordingly.
Cc: Don Brace
Signed-off-by: Keith Busch
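For anyone skimming the archive, the interface change is roughly the following (a sketch, not the actual diff; the fallback mirrors what blk-mq-pci already did when no affinity mask is available):

int blk_mq_pci_map_queues(struct blk_mq_tag_set *set, struct pci_dev *pdev,
			  int offset)
{
	const struct cpumask *mask;
	unsigned int queue, cpu;

	for (queue = 0; queue < set->nr_hw_queues; queue++) {
		/*
		 * Queue 0's affinity now lives at vector 'offset', not at
		 * vector 0, skipping any pre_vectors the driver reserved.
		 */
		mask = pci_irq_get_affinity(pdev, queue + offset);
		if (!mask)
			goto fallback;

		for_each_cpu(cpu, mask)
			set->mq_map[cpu] = queue;
	}

	return 0;

fallback:
	WARN_ON_ONCE(set->nr_hw_queues > 1);
	for_each_possible_cpu(cpu)
		set->mq_map[cpu] = 0;
	return 0;
}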
On 03/27/18 01:59, Jaco Kroon wrote:
I triggered it hoping to get a stack trace of the deadlocking process,
to find where the lock that ends up blocking everything is taken, but I
then realized that you mentioned sleeping, which may mean there is no
stack trace because there is no process active on a CPU.
All the queue memory is allocated up front. We don't take the node
into consideration when creating queues anymore, so removing the unused
parameter.
Signed-off-by: Keith Busch
Reviewed-by: Christoph Hellwig
---
v1 -> v2:
Added review.
drivers/nvme/host/pci.c | 10 +++---
1 file changed
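The visible part of the change is just the signature; an illustrative fragment (not the full diff):

-static int nvme_alloc_queue(struct nvme_dev *dev, int qid,
-		int depth, int node)
+static int nvme_alloc_queue(struct nvme_dev *dev, int qid, int depth)

The allocations inside stay keyed off dev_to_node(dev->dev), as they already were.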
The admin and first IO queues shared the first irq vector, which has an
affinity mask including cpu0. If a system allows cpu0 to be offlined,
the admin queue may not be usable if no other CPUs in the affinity mask
are online. This is a problem since unlike IO queues, there is only
one admin queue that always needs to be usable.
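This builds on the pre_vectors offset support above: reserve vector 0 for the admin queue alone and start the IO queues at vector 1 when more than one vector is available. A rough sketch of the vector allocation (num_vecs as the bookkeeping field the patch introduces):

	struct irq_affinity affd = {
		/* Leave vector 0 out of the spread; the admin queue owns it. */
		.pre_vectors = 1,
	};
	int result;

	result = pci_alloc_irq_vectors_affinity(pdev, 1, nr_io_queues + 1,
			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
	if (result <= 0)
		return -EIO;
	dev->num_vecs = result;
	dev->max_qid = max(result - 1, 1);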
On 03/27/2018 02:01 PM, Ming Lei wrote:
> Hi Stefan,
>
> On Tue, Mar 27, 2018 at 12:04:20PM +0200, Stefan Haberland wrote:
>> Hi,
>>
>> I get the following warning in __blk_mq_delay_run_hw_queue when the
>> scheduler is set to mq-deadline for DASD devices on s390.
>>
>> What I see is that for whatever reason there is a hctx nr 0 which has no
>> hctx->tags pointer set.
> +static inline unsigned int nvme_ioq_vector(struct nvme_dev *dev,
> +		unsigned int qid)
No need for the inline here I think.
> +{
> +	/*
> +	 * A queue's vector matches the queue identifier unless the controller
> +	 * has only one vector available.
> +	 */
> +	return dev->num_vecs == 1 ? 0 : qid;
> +}
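For context, the helper is used when wiring up each IO queue, along these lines (a sketch; whether the patch assigns it exactly here is an assumption):

	/*
	 * IO queue qid normally gets vector qid; all queues fall back to
	 * sharing vector 0 when only one vector could be allocated.
	 */
	nvmeq->cq_vector = nvme_ioq_vector(dev, qid);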
Looks good,
Reviewed-by: Christoph Hellwig
On Fri, Mar 23, 2018 at 04:19:21PM -0600, Keith Busch wrote:
> The PCI interrupt vectors intended to be associated with a queue may
> not start at 0. This patch adds an offset parameter so blk-mq may find
> the intended affinity mask. The default value is 0 so existing drivers
> that don't care about it are unaffected.
This warning is harmless, please try the following patch:
--
From 7b2b5139bfef80f44d1b1424e09ab35b715fbfdb Mon Sep 17 00:00:00 2001
From: Ming Lei
Date: Tue, 27 Mar 2018 19:54:23 +0800
Subject: [PATCH] blk-mq: only run mapped hw queues in blk_mq_run_hw_queues()
From commit 20e4d813931961fe ("blk-mq: simplify queue mapping & schedule
with each possisble CPU") on, it is easier to end up with an unmapped hctx
in some CPU topologies; that is, a hctx may not be mapped to any CPU.
Hi Stefan,
On Tue, Mar 27, 2018 at 12:04:20PM +0200, Stefan Haberland wrote:
> Hi,
>
> I get the following warning in __blk_mq_delay_run_hw_queue when the
> scheduler is set to mq-deadline for DASD devices on s390.
>
> What I see is that for whatever reason there is a hctx nr 0 which has no
>> hctx->tags pointer set.
Hi,
I get the following warning in __blk_mq_delay_run_hw_queue when the
scheduler is set to mq-deadline for DASD devices on s390.
What I see is that for whatever reason there is a hctx nr 0 which has no
hctx->tags pointer set.
From my observation it is always hctx nr 0 which has a NULL tags pointer.
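That observation fits the unmapped-hctx theory: the warning in __blk_mq_delay_run_hw_queue() fires when blk_mq_hw_queue_mapped() returns false, and that helper treats a hctx as mapped only when it has software ctxs and a tags pointer. It is roughly:

static inline bool blk_mq_hw_queue_mapped(struct blk_mq_hw_ctx *hctx)
{
	/* An unmapped hctx has no software ctxs and never gets tags. */
	return hctx->nr_ctx && hctx->tags;
}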
Hi Bart,
> The above call trace means that SysRq-l was triggered, either via the keyboard
> or through procfs. I don't think that there is any information in the above
> that reveals the root cause of why a reboot was necessary.
I triggered it hoping to get a stack trace of the deadlocking process,
to find where the lock that ends up blocking everything is taken.
On Mon, 26 Mar 2018 09:46:24 -0600
Logan Gunthorpe wrote:
> On 26/03/18 08:01 AM, Bjorn Helgaas wrote:
> > On Mon, Mar 26, 2018 at 12:11:38PM +0100, Jonathan Cameron wrote:
> >> On Tue, 13 Mar 2018 10:43:55 -0600
> >> Logan Gunthorpe wrote:
> >>> It turns out that root ports that support P2P
From: Dave Chinner
Currently iomap_dio_rw() only handles (data)sync write completions
for AIO. This means we can't optimise non-AIO IO to minimise device
flushes as we can't tell the caller whether a flush is required or
not.
To solve this problem and enable further optimisations, make
iomap_dio_rw() responsible for sync write completion handling for all
IO, not just AIO.
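The idea, sketched (IOMAP_DIO_NEED_SYNC is the flag the series adds; this is not the literal patch): the common completion path issues generic_write_sync() itself, whether completion runs inline for synchronous IO or from the AIO end_io side, so callers no longer guess whether a flush is still needed.

	/*
	 * Sketch: tail of the common iomap_dio completion path.
	 * dio->flags carries IOMAP_DIO_NEED_SYNC when the submitter
	 * asked for O_DSYNC semantics.
	 */
	ssize_t ret = dio->error ? dio->error : dio->size;

	if (ret > 0 && (dio->flags & IOMAP_DIO_NEED_SYNC))
		ret = generic_write_sync(dio->iocb, ret);
	return ret;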
From: Dave Chinner
To prepare for iomap infrastructure based DSYNC optimisations.
While moving the code around, move the XFS write bytes metric
update for direct IO into the xfs_dio_write_end_io callback so that we
always capture the amount of data written via AIO+DIO. This fixes
the problem where writes completed via AIO were not counted in that
metric.
From: Dave Chinner
If we are doing direct IO writes with datasync semantics, we often
have to flush metadata changes along with the data write. However,
if we are overwriting existing data, there are no metadata changes
that we need to flush. In this case, optimising the IO by using
a FUA write makes sense.
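In other words, the gate is: datasync wanted, pure overwrite of already mapped blocks, nothing else to flush. A hypothetical helper to make the conditions concrete (the name and placement are illustrative, not the patch's):

static bool dio_write_can_use_fua(const struct kiocb *iocb,
				  const struct iomap *iomap)
{
	/* Only plain O_DSYNC qualifies; O_SYNC must also flush file metadata. */
	if (!(iocb->ki_flags & IOCB_DSYNC) || (iocb->ki_flags & IOCB_SYNC))
		return false;
	/* Allocating writes and unwritten extents dirty metadata. */
	if (iomap->type != IOMAP_MAPPED)
		return false;
	return true;
}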
Hi folks,
This is a followup on my original patch to enable use of FUA writes
for pure data O_DSYNC writes through the XFS and iomap based direct
IO paths. This version has all of the changes Christoph asked for,
and splits it up into simpler patches. The performance improvements
are detailed in the relevant commit messages.