On Thu, Oct 25, 2018 at 2:44 AM Zhoujie Wu wrote:
>
> The smeta area l2p mapping is empty, and actually the
> recovery procedure only needs to restore the data sectors' l2p
> mapping. So ignore the smeta oob scan.
>
> Signed-off-by: Zhoujie Wu
> ---
> drivers/lightnvm/pblk-recovery.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
This feature is gone from null_blk in current Linux kernels.
It doesn't make sense to keep testing this on older kernels either,
as the legacy IO path is going away.
Signed-off-by: Jens Axboe
diff --git a/tests/block/022 b/tests/block/022
deleted file mode 100755
index
On 10/25/18 3:08 PM, Omar Sandoval wrote:
> On Thu, Oct 25, 2018 at 03:03:30PM -0600, Jens Axboe wrote:
>> This is no longer supported in recent kernels, get rid of
>> any testing of queue_mode=1. queue_mode=1 tested the legacy
>> IO path, which is going away completely. As such, there's
>> no
It's now unused.
Signed-off-by: Jens Axboe
---
block/blk-softirq.c | 20
include/linux/blkdev.h | 1 -
2 files changed, 21 deletions(-)
diff --git a/block/blk-softirq.c b/block/blk-softirq.c
index e47a2f751884..8ca0f6caf174 100644
--- a/block/blk-softirq.c
+++
Signed-off-by: Jens Axboe
---
block/blk-flush.c | 154 +-
block/blk.h | 4 +-
2 files changed, 31 insertions(+), 127 deletions(-)
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 8b44b86779da..9baa9a119447 100644
---
dm supports both, and since we're killing off the legacy path
in general, get rid of it in dm as well.
Signed-off-by: Jens Axboe
---
drivers/md/Kconfig | 11 --
drivers/md/dm-core.h | 10 --
drivers/md/dm-mpath.c | 14 +-
drivers/md/dm-rq.c | 293
Straightforward conversion; there's room for improvement.
Signed-off-by: Jens Axboe
---
drivers/memstick/core/mspro_block.c | 121 +++-
1 file changed, 66 insertions(+), 55 deletions(-)
diff --git a/drivers/memstick/core/mspro_block.c
Requires a few changes to the FC transport class as well.
Cc: Johannes Thumshirn
Cc: Benjamin Block
Cc: linux-s...@vger.kernel.org
Signed-off-by: Jens Axboe
---
block/bsg-lib.c | 123 +++
drivers/scsi/scsi_transport_fc.c | 59 +--
2
This will ease the conversion to blk-mq, where we can't set
a timeout handler after queue init.
Cc: Johannes Thumshirn
Cc: Benjamin Block
Cc: linux-s...@vger.kernel.org
Signed-off-by: Jens Axboe
---
block/bsg-lib.c | 3 ++-
drivers/scsi/scsi_transport_fc.c | 7
All drivers do unregister + cleanup, so provide a helper for that.
Cc: Johannes Thumshirn
Cc: Benjamin Block
Cc: linux-s...@vger.kernel.org
Signed-off-by: Jens Axboe
---
block/bsg-lib.c | 7 +++
drivers/scsi/scsi_transport_fc.c | 6 ++
It's now unused, kill it.
Signed-off-by: Jens Axboe
---
Documentation/block/biodoc.txt | 88
block/Makefile | 2 +-
block/blk-core.c | 6 -
block/blk-mq-debugfs.c | 2 -
block/blk-mq-tag.c | 6 +-
block/blk-sysfs.c
The first round of this went into 4.20-rc, but we still have some of
them pending. This patch series converts the remaining drivers to
blk-mq. The ones that support dual paths (like SCSI and DM) have
the non-mq path removed. At the end, legacy IO code and schedulers
are killed off.
This patch
We only support mq devices now.
Signed-off-by: Jens Axboe
---
block/blk-cgroup.c | 8
1 file changed, 8 deletions(-)
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 992da5592c6e..5f10d755ec52 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1446,8 +1446,6 @@ int
Everything is blk-mq at this point, so it doesn't make any sense
to have this option available as it does nothing.
Signed-off-by: Jens Axboe
---
block/Kconfig | 6 --
block/blk-wbt.c | 3 +--
2 files changed, 1 insertion(+), 8 deletions(-)
diff --git a/block/Kconfig b/block/Kconfig
index
This is dead code, any queue reaching this part has mq_ops
attached.
Signed-off-by: Jens Axboe
---
block/blk-merge.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 3561dcce2260..0128284bded4 100644
--- a/block/blk-merge.c
+++
No point in hiding what this does, just open code it in the
one spot where we are still using it.
Signed-off-by: Jens Axboe
---
block/blk-mq.c | 2 +-
include/linux/blkdev.h | 2 --
2 files changed, 1 insertion(+), 3 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index
This is no longer supported in recent kernels, get rid of
any testing of queue_mode=1.
Signed-off-by: Jens Axboe
diff --git a/tests/block/017 b/tests/block/017
index 715c4e59c514..cea29beaf062 100755
--- a/tests/block/017
+++ b/tests/block/017
@@ -26,27 +26,23 @@ show_inflight() {
test() {
This is no longer supported in recent kernels, get rid of
any testing of queue_mode=1. queue_mode=1 tested the legacy
IO path, which is going away completely. As such, there's
no point in doing any more testing with it.
Signed-off-by: Jens Axboe
---
Replaces the two previous patches - covers
With multiple maps, nr_cpu_ids is no longer the maximum number of
hardware queues we support on a given device. The initializer of
the tag_set can have set ->nr_hw_queues larger than the available
number of CPUs, since we can exceed that with multiple queue maps.
Signed-off-by: Jens Axboe
---
It can be useful for a user to verify what type a given hardware
queue is, so expose this information in sysfs.
Signed-off-by: Jens Axboe
---
block/blk-mq-sysfs.c | 10 ++
1 file changed, 10 insertions(+)
diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
index
We use IOCB_HIPRI to poll for IO in the caller instead of scheduling.
This information is not available for (or after) IO submission. The
driver may make different queue choices based on the type of IO, so
make the fact that we will poll for this IO known to the lower layers
as well.
Add support for the tag set carrying multiple queue maps, and
for the driver to inform blk-mq how many it wishes to support
through setting set->nr_maps.
This adds an mq_ops helper for drivers that support more than 1
map, mq_ops->flags_to_type(). The function takes request/bio flags
and CPU, and
A driver may have a need to allocate multiple sets of MSI/MSI-X
interrupts, and have them appropriately affinitized. Add support for
defining a number of sets in the irq_affinity structure, of varying
sizes, and get each set affinitized correctly across the machine.
Cc: Thomas Gleixner
Cc:
Adds support for defining a variable number of poll queues, currently
configurable with the 'poll_queues' module parameter. Defaults to
a single poll queue.
And now we finally have poll support without triggering interrupts!
Signed-off-by: Jens Axboe
---
drivers/nvme/host/pci.c | 103
Add a queue offset to the tag map. This enables users to map
iteratively, for each queue map type they support.
Bump maximum number of supported maps to 2, we're now fully
able to support more than 1 map.
Signed-off-by: Jens Axboe
---
block/blk-mq-cpumap.c | 9 +
block/blk-mq-pci.c
NVMe does round-robin between queues by default, which means that
sharing a queue map for both reads and writes can be problematic
in terms of read servicing. It's much easier to flood the queue
with writes and reduce the read servicing.
Implement two queue maps, one for reads and one for writes.
Since we insert per hardware queue, we have to ensure that every
request on the plug list being inserted belongs to the same
hardware queue.
Signed-off-by: Jens Axboe
---
block/blk-mq.c | 27 +--
1 file changed, 25 insertions(+), 2 deletions(-)
diff --git
On Thu, Oct 18, 2018 at 12:31:45PM +0200, Jan Kara wrote:
>
> Hello,
>
> these two patches create two new tests for blktests as regression tests
> for my recently posted loopback device fixes. More details in individual
> patches.
Thanks, Jan, I applied these; 007 was renamed to 006.
On Thu, Oct 25, 2018 at 03:03:30PM -0600, Jens Axboe wrote:
> This is no longer supported in recent kernels, get rid of
> any testing of queue_mode=1. queue_mode=1 tested the legacy
> IO path, which is going away completely. As such, there's
> no point in doing any more testing with it.
>
>
The mapping used to be dependent on just the CPU location, but
now it's a tuple of { type, cpu } instead. This is a prep patch
for allowing a single software queue to map to multiple hardware
queues. No functional changes in this patch.
Signed-off-by: Jens Axboe
---
block/blk-mq-sched.c | 2
Prep patch for being able to place requests based not just on
CPU location, but also on the type of request.
Signed-off-by: Jens Axboe
---
block/blk-flush.c | 7 +++---
block/blk-mq-debugfs.c | 4 +++-
block/blk-mq-sched.c | 16 ++
block/blk-mq-tag.c | 5 +++--
Doesn't do anything right now, but it's needed as a prep patch
to get the interfaces right.
Signed-off-by: Jens Axboe
---
block/blk-mq.h | 6 ++
1 file changed, 6 insertions(+)
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 889f0069dd80..79c300faa7ce 100644
--- a/block/blk-mq.h
+++
This is in preparation for allowing multiple sets of maps per
queue, if so desired.
Signed-off-by: Jens Axboe
---
block/blk-mq-cpumap.c | 10
block/blk-mq-pci.c | 10
block/blk-mq-rdma.c | 4 ++--
block/blk-mq-virtio.c
It's just a pointer to set->mq_map, use that instead.
Signed-off-by: Jens Axboe
---
block/blk-mq.c | 13 -
block/blk-mq.h | 4 +++-
include/linux/blkdev.h | 2 --
3 files changed, 7 insertions(+), 12 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index
This series adds support for multiple queue maps for blk-mq.
Since blk-mq was introduced, it has only supported a single queue
map. This means you can have one set of queues, and the mapping
purely depends on what CPU an IO originated from. With this
patch set, drivers can implement mappings that depend
On 10/25/2018 04:16 AM, Hans Holmberg wrote:
On Thu, Oct 25, 2018 at 2:44 AM Zhoujie Wu wrote:
The smeta area l2p mapping is empty, and actually the
recovery procedure only needs to restore the data sectors' l2p
mapping. So ignore the smeta oob scan.
Signed-off-by: Zhoujie Wu
---
v2: Modified based on a suggestion from Hans. The smeta may not start
at paddr 0 if the first block is bad. Use
After a one-year hiatus, the Linux Storage and Filesystems Conference (Vault)
returns in 2019, under the sponsorship and organization of the USENIX
Association. Vault brings together practitioners, implementers, users, and
researchers working on storage in open source and related projects.
We