On Tue, Jul 04, 2023 at 02:21:28PM +0200, Jan Kara wrote:
> +struct bdev_handle *blkdev_get_handle_by_dev(dev_t dev, blk_mode_t mode,
> + void *holder, const struct blk_holder_ops *hops)
> +{
> + struct bdev_handle *handle = kmalloc(sizeof(struct bdev_handle),
> +
Looks good.
Reviewed-by: Keith Busch
--
dm-devel mailing list
dm-devel@redhat.com
https://listman.redhat.com/mailman/listinfo/dm-devel
by Keith, maybe he can help to fill
> in what the proper notice should be?
Okay, this was initially introduced with 1d277a637a711a while Keith was
employed at Intel, so let's add this for the history:
/*
* Copyright (c) 2015 Intel Corporation
* Keith Busch
*/
_op and enforcing the appropriate segment
> limit - max_discard_segments for REQ_OP_DISCARDs and max_segments for
> everything else.
Looks good.
Reviewed-by: Keith Busch
On Wed, Feb 22, 2023 at 11:52:25AM -0700, Uday Shankar wrote:
> static inline unsigned int blk_rq_get_max_segments(struct request *rq)
> {
> - if (req_op(rq) == REQ_OP_DISCARD)
> - return queue_max_discard_segments(rq->q);
> - return queue_max_segments(rq->q);
> + return
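The refactor above collapses the discard-vs-everything-else branch into a single expression. A userspace sketch of the same pattern (the struct and names here are illustrative stand-ins, not the kernel's types):

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the kernel's request queue limits. */
enum req_op { REQ_OP_READ, REQ_OP_WRITE, REQ_OP_DISCARD };

struct queue {
	unsigned int max_segments;
	unsigned int max_discard_segments;
};

/* Discard requests have their own segment limit; every other
 * operation shares the general one. */
static unsigned int get_max_segments(const struct queue *q, enum req_op op)
{
	return op == REQ_OP_DISCARD ? q->max_discard_segments
				    : q->max_segments;
}
```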
On Wed, Dec 21, 2022 at 04:05:06AM +, Gulam Mohamed wrote:
> +u64 blk_get_iostat_ticks(struct request_queue *q)
> +{
> + return (blk_queue_precise_io_stat(q) ? ktime_get_ns() : jiffies);
> +}
> +EXPORT_SYMBOL_GPL(blk_get_iostat_ticks);
> +
> void update_io_ticks(struct block_device
On Wed, Dec 07, 2022 at 11:17:12PM +, Chaitanya Kulkarni wrote:
> On 12/7/22 15:08, Jens Axboe wrote:
> >
> > My default peak testing runs at 122M IOPS. That's also the peak IOPS of
> > the devices combined, and with iostats disabled. If I enabled iostats,
> > then the performance drops to
On Wed, Nov 23, 2022 at 07:42:26AM -0500, Sasha Levin wrote:
> From: Keith Busch
>
> [ Upstream commit 50a893359cd2643ee1afc96eedc9e7084cab49fa ]
>
> This device mapper needs bio vectors to be sized and memory aligned to
> the logical block size. Set the minimum required queue limit accordingly.
On Mon, Nov 14, 2022 at 06:31:36AM -0500, Mikulas Patocka wrote:
>
>
> On Fri, 11 Nov 2022, Keith Busch wrote:
>
> > > There are other DM targets that override logical_block_size in their
> > > .io_hints hook (writecache, ebs, zoned). Have you reasoned through
On Fri, Nov 11, 2022 at 01:07:05PM -0500, Mike Snitzer wrote:
> On Thu, Nov 10 2022 at 1:44P -0500,
> Keith Busch wrote:
>
> > From: Keith Busch
> >
> > The 6.0 kernel made some changes to the direct io interface to allow
> > offsets in user addres
From: Keith Busch
Device mappers had always been getting the default 511 dma mask, but
the underlying device might have a larger alignment requirement. Since
this value is used to determine allowable direct-io alignment, this
needs to be a stackable limit.
Signed-off-by: Keith Busch
Reviewed
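The dma_alignment value is a mask: a buffer address is acceptable when (addr & mask) == 0, so stacking means keeping the stricter (larger) mask of the two devices. A small sketch under that assumption (function names are illustrative, not the kernel's):

```c
/* dma_alignment is a mask; buffers must satisfy (addr & mask) == 0.
 * When stacking a device mapper on a backing device, keep the
 * stricter (numerically larger) mask. */
static unsigned int stack_dma_alignment(unsigned int top, unsigned int bottom)
{
	return top > bottom ? top : bottom;
}

static int buffer_ok(unsigned long addr, unsigned int mask)
{
	return (addr & mask) == 0;
}
```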
From: Keith Busch
This device mapper needs bio vectors to be sized and memory aligned to
the logical block size. Set the minimum required queue limit
accordingly.
Signed-off-by: Keith Busch
---
drivers/md/dm-log-writes.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/md/dm-log
From: Keith Busch
This device mapper needs bio vectors to be sized and memory aligned to
the logical block size. Set the minimum required queue limit
accordingly.
Link: https://lore.kernel.org/linux-block/20221101001558.648ee...@xps.demsh.org/
Fixes: b1a000d3b8ec5 ("block: relax direct io memory alignment")
From: Keith Busch
This device mapper needs bio vectors to be sized and memory aligned to
the logical block size. Set the minimum required queue limit
accordingly.
Signed-off-by: Keith Busch
---
drivers/md/dm-integrity.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/md/dm
On Thu, Nov 10, 2022 at 06:24:03PM +, Eric Biggers wrote:
> On Thu, Nov 03, 2022 at 08:25:56AM -0700, Keith Busch wrote:
> > From: Keith Busch
> >
> > The 6.0 kernel made some changes to the direct io interface to allow
> > offsets in user addresses. This is based on
From: Keith Busch
There are no external users of this function.
Signed-off-by: Keith Busch
Reviewed-by: Christoph Hellwig
---
block/blk-settings.c   | 1 -
block/blk.h            | 1 +
include/linux/blkdev.h | 1 -
3 files changed, 1 insertion(+), 2 deletions(-)
diff --git a/block/blk
From: Keith Busch
The 6.0 kernel made some changes to the direct io interface to allow
offsets in user addresses. This is based on the hardware's capabilities
reported in the request_queue's dma_alignment attribute.
dm-crypt, -log-writes and -integrity require direct io be aligned to the
block
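These targets require direct I/O to be aligned to the logical block size, which means both the user address and the length must be multiples of it. A minimal sketch of that check, assuming a power-of-two block size:

```c
#include <stdint.h>

/* Direct I/O alignment check: with e.g. a 4096-byte logical block
 * size, the user address and the transfer length must both be
 * block-aligned. Sketch only; lbs must be a power of two. */
static int dio_aligned(uint64_t addr, uint64_t len, uint32_t lbs)
{
	uint32_t mask = lbs - 1;

	return ((addr | len) & mask) == 0;
}
```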
From: Keith Busch
The 6.0 kernel made some changes to the direct io interface to allow
offsets in user addresses. This is based on the hardware's capabilities
reported in the request_queue's dma_alignment attribute.
dm-crypt requires direct io be aligned to the block size. Since it was
only ever
On Thu, Nov 03, 2022 at 12:33:19PM -0400, Mikulas Patocka wrote:
> Hi
>
> The patchset seems OK - but dm-integrity also has a limitation that the
> bio vectors must be aligned on logical block size.
>
> dm-writecache and dm-verity seem to handle unaligned bioset, but you
> should check them
From: Keith Busch
There are no external users of this function.
Signed-off-by: Keith Busch
---
block/blk-settings.c   | 1 -
block/blk.h            | 1 +
include/linux/blkdev.h | 1 -
3 files changed, 1 insertion(+), 2 deletions(-)
diff --git a/block/blk-settings.c b/block/blk-settings.c
From: Keith Busch
Device mappers had always been getting the default 511 dma mask, but
the underlying device might have a larger alignment requirement. Since
this value is used to determine allowable direct-io alignment, this
needs to be a stackable limit.
Signed-off-by: Keith Busch
From: Keith Busch
This device mapper needs bio vectors to be sized and memory aligned to
the logical block size. Set the minimum required queue limit
accordingly.
Fixes: b1a000d3b8ec5 ("block: relax direct io memory alignment")
Reported-by: Eric Biggers
Reported-by: Dmitrii Tcvetkov
it's a regression from the following kernel commit:
> >
> > commit b1a000d3b8ec582da64bb644be633e5a0beffcbf
> > Author: Keith Busch
> > Date: Fri Jun 10 12:58:29 2022 -0700
> >
> > block: relax direct io memory alignment
>
> I sug
following kernel commit:
>
> commit b1a000d3b8ec582da64bb644be633e5a0beffcbf
> Author: Keith Busch
> Date: Fri Jun 10 12:58:29 2022 -0700
>
> block: relax direct io memory alignment
>
> The bug is that if a dm-crypt device is set up with a crypto sect
On Wed, Nov 02, 2022 at 08:03:45PM +0300, Dmitrii Tcvetkov wrote:
>
> Applied on top 6.1-rc3, the issue still reproduces.
Yeah, I see that now. I needed to run a dm-crypt setup to figure out how
they're actually doing this, so now I have that up and running.
I think this type of usage will
On Wed, Nov 02, 2022 at 08:52:15AM -0600, Keith Busch wrote:
> [Cc'ing Dmitrii, who also reported the same issue]
>
> On Tue, Nov 01, 2022 at 08:11:15PM -0700, Eric Biggers wrote:
> > Hi,
> >
> > I happened to notice the following QEMU bug report:
> >
>
On Fri, Oct 28, 2022 at 11:06:29AM -0500, Mike Christie wrote:
> On 10/27/22 12:06 PM, Keith Busch wrote:
> > On Wed, Oct 26, 2022 at 06:19:34PM -0500, Mike Christie wrote:
> >> This patch moves the pr code to it's own file because I'm going to be
> >> adding more func
On Wed, Oct 26, 2022 at 06:19:36PM -0500, Mike Christie wrote:
> For Reservation Report support we need to also convert from the NVMe spec
> PR type back to the block PR definition. This moves us to an array, so in
> the next patch we can add another helper to do the conversion without
> having to
not currently used, but will be in this patchset which adds
> support for the reservation report command.
>
> Signed-off-by: Mike Christie
Looks good.
Reviewed-by: Keith Busch
On Thu, Oct 27, 2022 at 12:13:06PM -0500, michael.chris...@oracle.com wrote:
> Oh wait there was also a
>
> 3. The pr_types come from userspace so if it passes us 10
> and we just do:
>
> types[pr_type]
>
> then we would crash due an out of bounds error.
>
> Similarly I thought there could be
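The concern above is that a PR type coming from userspace can index past the end of the conversion array. The usual fix is a bounds check before the table lookup, plus a sentinel for holes in the table. A sketch of that pattern (the enum values and table contents here are illustrative, not the real block/NVMe PR definitions):

```c
/* Illustrative PR-type table, indexed by the block-layer type and
 * bounds-checked so a bogus value from userspace cannot read past
 * the end of the array. */
enum pr_type {
	PR_WRITE_EXCLUSIVE  = 1,
	PR_EXCLUSIVE_ACCESS = 2,
	PR_TYPE_MAX
};

static const unsigned char nvme_pr_type[PR_TYPE_MAX] = {
	[PR_WRITE_EXCLUSIVE]  = 0x1,
	[PR_EXCLUSIVE_ACCESS] = 0x2,
};

static int to_nvme_pr_type(unsigned int type)
{
	/* Reject out-of-range and unmapped types instead of crashing. */
	if (type >= PR_TYPE_MAX || nvme_pr_type[type] == 0)
		return -1;
	return nvme_pr_type[type];
}
```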
Looks good.
Reviewed-by: Keith Busch
too diverse
in its responsibilities.
Reviewed-by: Keith Busch
On Tue, Aug 09, 2022 at 10:56:55AM +, Chaitanya Kulkarni wrote:
> On 8/8/22 17:04, Mike Christie wrote:
> > +
> > + c.common.opcode = nvme_cmd_resv_report;
> > + c.common.cdw10 = cpu_to_le32(nvme_bytes_to_numd(data_len));
> > + c.common.cdw11 = 1;
> > + *eds = true;
> > +
> > +retry:
>
On Wed, Aug 10, 2022 at 01:45:48AM +, Chaitanya Kulkarni wrote:
> On 8/9/22 09:21, Mike Christie wrote:
> > On 8/9/22 9:51 AM, Keith Busch wrote:
> >> On Tue, Aug 09, 2022 at 10:56:55AM +, Chaitanya Kulkarni wrote:
> >>> On 8/8
On Fri, Jun 03, 2022 at 01:55:34AM -0500, Mike Christie wrote:
> @@ -171,6 +171,7 @@ static const struct {
> /* zone device specific errors */
> [BLK_STS_ZONE_OPEN_RESOURCE]= { -ETOOMANYREFS, "open zones
> exceeded" },
> [BLK_STS_ZONE_ACTIVE_RESOURCE] = { -EOVERFLOW,
On Sat, Apr 09, 2022 at 06:50:40AM +0200, Christoph Hellwig wrote:
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index efb85c6d8e2d5..7e07dd69262a7 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -1607,10 +1607,8 @@ static void
On Fri, Feb 04, 2022 at 03:15:02PM +0100, Hannes Reinecke wrote:
> On 2/4/22 10:58, Chaitanya Kulkarni wrote:
>
> > and if that is the case why we don't have ZNS NVMeOF target
> > memory backed emulation ? Isn't that a bigger and more
> > complicated feature than Simple Copy where controller
On Thu, Feb 03, 2022 at 01:50:06PM -0500, Mikulas Patocka wrote:
> On Tue, 1 Feb 2022, Bart Van Assche wrote:
> > Only supporting copying between contiguous LBA ranges seems restrictive to
> > me.
> > I expect garbage collection by filesystems for UFS devices to perform better
> > if multiple LBA
On Thu, Feb 03, 2022 at 07:38:43AM -0800, Luis Chamberlain wrote:
> On Wed, Feb 02, 2022 at 08:00:12AM +, Chaitanya Kulkarni wrote:
> > Mikulas,
> >
> > On 2/1/22 10:33 AM, Mikulas Patocka wrote:
> > > External email: Use caution opening links or attachments
> > >
> > >
> > > This patch
On Tue, Feb 01, 2022 at 01:32:29PM -0500, Mikulas Patocka wrote:
> +int blkdev_issue_copy(struct block_device *bdev1, sector_t sector1,
> + struct block_device *bdev2, sector_t sector2,
> + sector_t nr_sects, sector_t *copied, gfp_t gfp_mask)
> +{
> + struct
On Thu, Nov 04, 2021 at 06:34:31PM +0100, Christoph Hellwig wrote:
> On Thu, Nov 04, 2021 at 10:32:35AM -0700, Darrick J. Wong wrote:
> > I also wonder if it would be useful (since we're already having a
> > discussion elsewhere about data integrity syscalls for pmem) to be able
> > to call this
On Wed, Nov 03, 2021 at 11:46:29PM -0700, Chaitanya Kulkarni wrote:
> +static inline blk_status_t nvme_setup_verify(struct nvme_ns *ns,
> + struct request *req, struct nvme_command *cmnd)
> +{
Due to recent driver changes, you need a "memset(cmnd, 0, sizeof(*cmnd))"
prior to setting
On Fri, Oct 29, 2021 at 09:15:43AM -0700, Bart Van Assche wrote:
> On 10/28/21 10:51 PM, Hannes Reinecke wrote:
> > Also Keith presented his work on a simple zone-based remapping block
> > device, which included an in-kernel copy offload facility.
> > Idea is to lift that as a standalone patch
On Wed, Oct 13, 2021 at 07:10:24AM +0200, Christoph Hellwig wrote:
> Use the proper helper to read the block device size.
Just IMO, this patch looks like it wants a new bdev_nr_bytes() helper
instead of using the double shifting sectors back to bytes.
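The suggestion is to compute the byte size from the sector count once, behind a helper, rather than open-coding the shift at every call site. A trivial sketch of what such a helper does:

```c
#include <stdint.h>

#define SECTOR_SHIFT 9	/* block-layer sectors are 512 bytes */

/* Sketch of the suggested bdev_nr_bytes()-style helper: derive the
 * device size in bytes from its sector count in one place. */
static uint64_t nr_bytes(uint64_t nr_sectors)
{
	return nr_sectors << SECTOR_SHIFT;
}
```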
> clear_bit_unlock(0, &ctrl->discard_page_busy);
> else
> - kfree(page_address(page) + req->special_vec.bv_offset);
> + kfree(bvec_virt(&req->special_vec));
> }
> }
> EXPORT_SYMBOL_GPL(nvme_cleanup_cmd);
Looks good.
Reviewed-by: Keith Busch
On Tue, Jun 29, 2021 at 09:23:18PM +0200, Martin Wilck wrote:
> On Di, 2021-06-29 at 14:59 +0200, Christoph Hellwig wrote:
> > On Mon, Jun 28, 2021 at 04:57:33PM +0200, Martin Wilck wrote:
> >
> > > The sg_io-on-multipath code needs to answer the question "is this a
> > > path or a target
On Wed, Mar 24, 2021 at 08:19:17PM +0800, Ming Lei wrote:
> +static inline void blk_create_io_context(struct request_queue *q)
> +{
> + /*
> + * Various block parts want %current->io_context, so allocate it up
> + * front rather than dealing with lots of pain to allocate it only
> +
On Sat, Feb 20, 2021 at 06:01:56PM +, David Laight wrote:
> From: SelvaKumar S
> > Sent: 19 February 2021 12:45
> >
> > This patchset tries to add support for TP4065a ("Simple Copy Command"),
> > v2020.05.04 ("Ratified")
> >
> > The Specification can be found in following link.
> >
On Fri, Dec 11, 2020 at 07:21:38PM +0530, SelvaKumar S wrote:
> +int blk_copy_emulate(struct block_device *bdev, struct blk_copy_payload
> *payload,
> + gfp_t gfp_mask)
> +{
> + struct request_queue *q = bdev_get_queue(bdev);
> + struct bio *bio;
> + void *buf = NULL;
> +
> Signed-off-by: Christoph Hellwig
Looks good.
Reviewed-by: Keith Busch
On Fri, Dec 04, 2020 at 11:25:12AM +, Damien Le Moal wrote:
> On 2020/12/04 20:02, SelvaKumar S wrote:
> > This patchset tries to add support for TP4065a ("Simple Copy Command"),
> > v2020.05.04 ("Ratified")
> >
> > The Specification can be found in following link.
> >
On Tue, Dec 01, 2020 at 05:54:18PM +0100, Christoph Hellwig wrote:
> diff --git a/block/blk.h b/block/blk.h
> index 98f0b1ae264120..64dc8e5a3f44cb 100644
> --- a/block/blk.h
> +++ b/block/blk.h
> @@ -99,8 +99,8 @@ static inline void blk_rq_bio_prep(struct request *rq,
> struct bio *bio,
>
On Thu, Dec 03, 2020 at 09:33:59AM -0500, Mike Snitzer wrote:
> On Wed, Dec 02 2020 at 10:26pm -0500,
> Ming Lei wrote:
>
> > I understand it isn't related with correctness, because the underlying
> > queue can split by its own chunk_sectors limit further. So is the issue
> > too many
On Sat, Sep 12, 2020 at 10:06:30PM +0800, Ming Lei wrote:
> On Fri, Sep 11, 2020 at 05:53:38PM -0400, Mike Snitzer wrote:
> > It is possible for a block device to use a non power-of-2 for chunk
> > size which results in a full-stripe size that is also a non
> > power-of-2.
> >
> > Update
On Wed, Sep 09, 2020 at 01:06:39PM -0700, Joe Perches wrote:
> diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
> index eea0f453cfb6..8aac5bc60f4c 100644
> --- a/crypto/tcrypt.c
> +++ b/crypto/tcrypt.c
> @@ -2464,7 +2464,7 @@ static int do_test(const char *alg, u32 type, u32 mask,
> int m, u32
On Fri, May 22, 2020 at 09:36:18AM -0400, Martin K. Petersen wrote:
>
> >>> + if (t->io_opt & (t->physical_block_size - 1))
> >>> + t->io_opt = lcm(t->io_opt, t->physical_block_size);
> >
> >> Any comment on this patch ? Note: the patch the patch "nvme: Fix
> >> io_opt limit setting" is
On Thu, Oct 24, 2019 at 03:50:03PM +0900, Damien Le Moal wrote:
> - /* Do a report zone to get max_lba and the same field */
> - ret = sd_zbc_do_report_zones(sdkp, buf, bufsize, 0, false);
> + /* Do a report zone to get max_lba and the size of the first zone */
> + ret =
On Mon, Nov 12, 2018 at 04:53:23PM -0500, Mike Snitzer wrote:
> On Mon, Nov 12 2018 at 11:23am -0500,
> Martin Wilck wrote:
>
> > Hello Lijie,
> >
> > On Thu, 2018-11-08 at 14:09 +0800, lijie wrote:
> > > Add support for Asynchronous Namespace Access as specified in NVMe
> > > 1.3
> > > TP
This patch provides a common decoder for block status path related errors
that may be retried so various entities wishing to consult this do not
have to duplicate this decision.
Acked-by: Mike Snitzer <snit...@redhat.com>
Reviewed-by: Hannes Reinecke <h...@suse.com>
Signed-off-by:
Uses common code for determining if an error should be retried on
alternate path.
Acked-by: Mike Snitzer <snit...@redhat.com>
Reviewed-by: Hannes Reinecke <h...@suse.com>
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
drivers/md/dm-mpath.c | 19 ++-
Hannes Reinecke <h...@suse.com>
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
drivers/nvme/host/core.c | 9 +
drivers/nvme/host/multipath.c | 44 ---
drivers/nvme/host/nvme.h | 5 +++--
3 files changed, 16 insertion
kernel doc for it.
Added reviews and acks.
Keith Busch (5):
nvme: Add more command status translation
nvme/multipath: Consult blk_status_t for failover
block: Provide blk_status_t decoding for path errors
nvme/multipath: Use blk_path_error
dm mpath: Use blk_path_error
drivers/md/dm-mpat
This adds more NVMe status code translations to blk_status_t values,
and captures all the current status codes NVMe multipath uses.
Acked-by: Mike Snitzer <snit...@redhat.com>
Reviewed-by: Hannes Reinecke <h...@suse.com>
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
On Mon, Jan 08, 2018 at 01:57:07AM -0800, Christoph Hellwig wrote:
> > - if (unlikely(nvme_req(req)->status && nvme_req_needs_retry(req))) {
> > - if (nvme_req_needs_failover(req)) {
> > + blk_status_t status = nvme_error_status(req);
> > +
> > + if (unlikely(status != BLK_STS_OK
On Mon, Jan 08, 2018 at 04:34:36PM +0100, Christoph Hellwig wrote:
> It's basically a kernel bug as it tries to access lbas that do not
> exist. BLK_STS_TARGET should be fine.
Okay, I'll fix this and address your other comments, and resend. Thanks
for the feedback.
On Thu, Jan 04, 2018 at 06:36:27PM -0500, Mike Snitzer wrote:
> Right, I dropped that patch since it'd have only resulted in conflicts
> come merge time. As such, this series can easily go through the nvme
> tree to Jens.
It looks like you can also touch up dm to allow it to multipath nvme
even
Uses common code for determining if an error should be retried on
alternate path.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
drivers/md/dm-mpath.c | 19 ++-
1 file changed, 2 insertions(+), 17 deletions(-)
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-m
Uses common code for determining if an error should be retried on
alternate path.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
drivers/nvme/host/multipath.c | 14 +-
1 file changed, 1 insertion(+), 13 deletions(-)
diff --git a/drivers/nvme/host/multipath.c b/driver
This adds more NVMe status code translations to blk_status_t values,
and captures all the current status codes NVMe multipath uses.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
drivers/nvme/host/core.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/drivers/nvme/host/co
This patch provides a common decoder for block status that may be retried
so various entities wishing to consult this do not have to duplicate
this decision.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
include/linux/blk_types.h | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
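The decoder this patch adds classifies a blk_status_t as either a path failure (worth retrying on an alternate path) or an error the target itself reported (not worth retrying). A userspace sketch of that switch; the enum values here are local stand-ins for the kernel's blk_status_t codes:

```c
#include <stdbool.h>

/* Local stand-ins for the kernel's blk_status_t codes. */
enum blk_status {
	BLK_STS_OK, BLK_STS_NOTSUPP, BLK_STS_TARGET, BLK_STS_NOSPC,
	BLK_STS_MEDIUM, BLK_STS_PROTECTION, BLK_STS_IOERR, BLK_STS_TRANSPORT,
};

/* Errors the target itself reported will fail the same way on any
 * path, so only the rest are treated as retryable path errors. */
static bool path_error_sketch(enum blk_status error)
{
	switch (error) {
	case BLK_STS_NOTSUPP:
	case BLK_STS_TARGET:
	case BLK_STS_NOSPC:
	case BLK_STS_MEDIUM:
	case BLK_STS_PROTECTION:
		return false;
	default:
		return true;
	}
}
```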
This removes nvme multipath's specific status decoding to see if failover
is needed, using the generic blk_status_t that was translated earlier. This
abstraction from the raw NVMe status means nvme status decoding exists
in just one place.
Signed-off-by: Keith Busch <keith.bu...@intel.
-4.16 without that patch, as I'm not seeing it in the most
current branch.
Keith Busch (5):
nvme: Add more command status translation
nvme/multipath: Consult blk_status_t for failover
block: Provide blk_status_t decoding for retryable errors
nvme/multipath: Use blk_retryable
dm mpath: Use
Instead of hiding NVMe path related errors, the NVMe driver needs to
code an appropriate generic block status from an NVMe status.
We already do this translation whether or not CONFIG_NVME_MULTIPATHING is
set, so I think it's silly NVMe native multipathing has a second status
decoder. This just
On Tue, Dec 19, 2017 at 04:05:41PM -0500, Mike Snitzer wrote:
> These patches enable DM multipath to work well on NVMe over Fabrics
> devices. Currently that implies CONFIG_NVME_MULTIPATH is _not_ set.
>
> But follow-on work will be to make it so that native NVMe multipath
> and DM multipath can
On Tue, Dec 12, 2017 at 01:35:13PM +0200, Nikolay Borisov wrote:
> On 11.12.2017 18:00, Scott Bauer wrote:
> > +As an example:
> > +
> > +Intel NVMe drives contain two cores on the physical device.
> > +Each core of the drive has segregated access to its LBA range.
> > +The current
On Mon, Dec 11, 2017 at 09:00:19AM -0700, Scott Bauer wrote:
> +Example scripts:
> +
> +
> +dmsetup create nvmset1 --table '0 1 dm-unstripe /dev/nvme0n1 1 2 0'
> +dmsetup create nvmset0 --table '0 1 dm-unstripe /dev/nvme0n1 0 2 0'
> +
> +There will now be two mappers:
>
you want, and tests
successfully on my synthetic workloads.
Acked-by: Keith Busch <keith.bu...@intel.com>
On Tue, May 09, 2017 at 03:19:31PM +0530, Neeraj Soni wrote:
>+ Alasdair and dm-devel for awareness and inputs.
>
>On 5/9/2017 12:26 PM, Neeraj Soni wrote:
>
> Hi Keith/Snitzer,
>
> I have recently started using kernel 4.4 on a Android device and ran
> Androbench to check
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
v1->v2:
Removed explicitly setting the wwid path information. We get that with
through exported udev attributes.
Added default retain_hwhandler to off for NVME devices. This has the
kernel not call into scsi specific APIs t
On Mon, Feb 20, 2017 at 11:57:59AM -0600, Benjamin Marzinski wrote:
> > +
> > + snprintf(pp->vendor_id, SCSI_VENDOR_SIZE, "NVME");
> > + snprintf(pp->product_id, SCSI_PRODUCT_SIZE, "%s",
> > udev_device_get_sysattr_value(nvme, "model"));
> > + snprintf(pp->serial, SERIAL_SIZE, "%s",
> >
On Thu, Feb 16, 2017 at 01:21:29PM -0500, Mike Snitzer wrote:
> Then undeprecate them. Decisions like marking a path checker deprecated
> were _not_ made with NVMe in mind. They must predate NVMe.
>
> multipath-tools has tables that specify all the defaults for a given
> target backend. NVMe
On Thu, Feb 16, 2017 at 05:37:41PM +, Bart Van Assche wrote:
> On Thu, 2017-02-16 at 12:38 -0500, Keith Busch wrote:
> > Maybe I'm not seeing the bigger picture. Is there some part to multipath
> > that the kernel is not in a better position to handle?
>
> Does th
On Thu, Feb 16, 2017 at 10:13:37AM -0500, Mike Snitzer wrote:
> On Thu, Feb 16 2017 at 9:26am -0500,
> Christoph Hellwig wrote:
>
> > just a little new code in the block layer, and a move of the path
> > selectors from dm to the block layer. I would not call this
> >
On Wed, Feb 15, 2017 at 06:57:21AM -0800, Christoph Hellwig wrote:
> On Tue, Feb 14, 2017 at 06:00:23PM -0500, Keith Busch wrote:
> > Good point. I was unknowingly running with CONFIG_SCSI_DH disabled,
> > and blissfully unaware of its existence! After enabling that option,
> >
On Tue, Feb 14, 2017 at 01:35:45PM -0800, Bart Van Assche wrote:
> On 02/14/2017 01:19 PM, Keith Busch wrote:
> > These devices are mulitpath capable, and have been able to stack with
> > dm-mpath since kernel 4.2.
> >
> > - str = STRDUP("^(ram|raw|loop|fd|md
On Thu, Jun 30, 2016 at 06:52:07PM -0400, Mike Snitzer wrote:
> AFAIK, hch had Intel disable that by default in the hopes of avoiding
> people having dm-multipath "just work" with NVMeoF. (Makes me wonder
> what other unpleasant unilateral decisions were made because some
> non-existant NVMe