For kref_get_unless_zero to protect against lookup vs free races we need
to use it in all places where we aren't guaranteed to already hold a
reference. There is no such guarantee in nvme_find_get_ns, so switch to
kref_get_unless_zero in this function.
Signed-off-by: Christoph Hellwig <h...@lst.de>
device is called.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/nvme/host/core.c | 76 ++--
drivers/nvme/host/nvme.h | 3 +-
2 files changed, 18 insertions(+), 61 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/
do the unless_zero variant there.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/nvme/host/core.c | 43 ++-
drivers/nvme/host/fc.c | 8 ++--
drivers/nvme/host/nvme.h | 12 +++-
drivers/nvme/host/pci.c| 2 +-
drivers/nvme/ho
With this flag a driver can create a gendisk that can be used for I/O
submission inside the kernel, but which is not registered as user
facing block device. This will be useful for the NVMe multipath
implementation.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/genhd.c
into the struct device.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/genhd.c | 4
include/linux/genhd.h | 2 +-
2 files changed, 1 insertion(+), 5 deletions(-)
diff --git a/block/genhd.c b/block/genhd.c
index dd305c65ffb0..1174d24e405e 100644
--- a/block/genhd.c
+++ b/block/g
Switch to the ida_simple_* helpers instead of opencoding them.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/nvme/host/core.c | 40 +++-
1 file changed, 7 insertions(+), 33 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvm
Now that we are protected against lookup vs free races for the namespace
by using kref_get_unless_zero we don't need the hack of NULLing out the
disk private data during removal.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/nvme/host/core.
Set aside a bit in the request/bio flags for driver use.
Signed-off-by: Christoph Hellwig <h...@lst.de>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
---
include/linux/blk_types.h | 5 +
1 file changed, 5 insertions(+)
diff --git a/include/linux/blk_types.h b/include/linux/blk_t
Hi all,
this series adds support for multipathing, that is, accessing nvme
namespaces through multiple controllers, to the nvme core driver.
It is a very thin and efficient implementation that relies on
close cooperation with other bits of the nvme driver, and a few small
and simple block helpers.
This flag should be before the operation-specific REQ_NOUNMAP bit.
Signed-off-by: Christoph Hellwig <h...@lst.de>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
---
include/linux/blk_types.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/b
On Wed, Oct 11, 2017 at 03:04:14PM +0300, Sagi Grimberg wrote:
>
>> +/*
>> + * Anchor structure for namespaces. There is one for each namespace in a
>> + * NVMe subsystem that any of our controllers can see, and the namespace
>> + * structure for each controller is chained off it. For private
>> static void nvme_free_ctrl(struct kref *kref)
>> {
>> struct nvme_ctrl *ctrl = container_of(kref, struct nvme_ctrl, kref);
>> +struct nvme_subsystem *subsys = ctrl->subsys;
>> put_device(ctrl->device);
>> nvme_release_instance(ctrl);
>> ida_destroy(&ctrl->ns_ida);
>> +
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
Looks good,
Reviewed-by: Christoph Hellwig <h...@lst.de>
Jens, are you fine with picking this up throught the nvme tree?
I think Sagi's later changes rely on it, so that would make life
a little easier.
On Fri, Oct 06, 2017 at 02:31:20PM +0200, Ilya Dryomov wrote:
> This would unconditionally overwrite any WRITE ZEROS error. If we get
> e.g. -EIO, and manual zeroing is not allowed, I don't think we want to
> return -EOPNOTSUPP?
>
> Returning -EOPNOTSUPP to mean "can't zero using either method"
On Fri, Oct 13, 2017 at 02:45:51PM +0200, Matias Bjørling wrote:
> From: Rakesh Pandit
>
> When a virtual block device is formatted and mounted after creating
> with "nvme lnvm create... -t pblk", a removal from "nvm lnvm remove"
> would result in this:
>
> 446416.309757]
On Fri, Oct 06, 2017 at 02:01:46PM +0200, Javier González wrote:
> I think it is good to fail fast as any other nvme I/O command and then
> recover in pblk if necessary.
Note that we only do it for other nvme _passthrough_ commands - the
actual I/O commands do not get the failfast flag.
On Thu, Oct 05, 2017 at 09:32:33PM +0200, Ilya Dryomov wrote:
> This is to avoid returning -EREMOTEIO in the following case: device
> doesn't support WRITE SAME but scsi_disk::max_ws_blocks != 0, zeroout
> is called with BLKDEV_ZERO_NOFALLBACK. Enter blkdev_issue_zeroout(),
>
On Fri, Oct 06, 2017 at 11:19:09AM +0200, Javier González wrote:
> on the lightnvm I/O path and that has propagated through the code as we
> added more functionality. Can you explain why this is necessary? If I
> can just remove it, it is much easier to do the cleanup.
>
> I have tested on our HW
d72b5fb1219fc74625b0380930f9c580df Mon Sep 17 00:00:00 2001
From: Christoph Hellwig <h...@lst.de>
Date: Fri, 6 Oct 2017 10:18:53 +0200
Subject: mm: move all laptop_mode handling to backing-dev.c
It isn't block-device specific and oddly spread over multiple files
at the moment:
TODO: audit
We already have a queue_is_rq_based helper to check if a request_queue
is request based, so we can remove the flag for it.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/blk-mq-debugfs.c | 1 -
block/elevator.c | 2 +-
drivers/md/dm-rq.c | 2 +-
drivers/md/dm-table.c
ES failing
> and fall back to manually zeroing, unless BLKDEV_ZERO_NOFALLBACK is
> specified. For BLKDEV_ZERO_NOFALLBACK case, return -EOPNOTSUPP if
> sd_done() has just set ->no_write_same thus indicating lack of offload
> support.
>
> Fixes: c20cfc27a473 ("block: stop using
Looks good,
Reviewed-by: Christoph Hellwig <h...@lst.de>
:
nvme-pci: Use PCI bus address for data/queues in CMB (2017-10-04 11:42:53
+0200)
Christoph Hellwig (1):
nvme-pci: Use PCI bus address for data/queues in CMB
Martin Wilck (1):
nvme: fix visibility of "uuid" ns
Does the patch below fix the warning for you?
--
>From 28aae7104425433d39e6142adcd5b88dc5b0ad5f Mon Sep 17 00:00:00 2001
From: Christoph Hellwig <h...@lst.de>
Date: Thu, 5 Oct 2017 18:31:02 +0200
Subject: block: use DECLARE_COMPLETION_ONSTACK in submit_bio_wait
This way we get ou
On Thu, Oct 05, 2017 at 05:06:54PM +0100, John Garry wrote:
> It's a HiSilicon hip07 (D05) platform. For this platform, the integrated
> SAS controller is a platform device. This controller supports 16 hw queues.
That's v1 or v2 in drivers/scsi/hisi_sas?
Seems like you need to implement the
On Thu, Oct 05, 2017 at 02:28:43PM +0100, John Garry wrote:
> I know that we can add our own per-driver mapping function to solve, but I
> would expect that that generic mapper would cover a generic platform.
Why aren't you using blk_mq_pci_map_queues?
What kind of hardware are we talking
On Wed, Oct 04, 2017 at 09:18:11AM +0200, Johannes Thumshirn wrote:
> Wouldn't it make sense to put the ->release() method into bsg_ops as
> well? The current prototype of bsg_register_queue isn't exactly what I
> would call a sane API.
It's a different level of callback - ops are the type of
> + /*
> + * Ensure that the effect of blk_set_preempt_only() is globally
> + * visible before unfreezing the queue.
> + */
> + if (err == 0)
> + synchronize_rcu();
I don't understand why we'd need this. The flag is set both under
a spinlock and a mutex that
> + /*
> + * Do not attempt to freeze the queue of an already quiesced device
> + * because that could result in a deadlock.
> + */
> + freeze = sdev->sdev_state == SDEV_RUNNING;
> + if (freeze)
> + blk_mq_freeze_queue(q);
> err =
> +EXPORT_SYMBOL(blk_set_preempt_only);
EXPORT_SYMBOL_GPL please.
Except for that this looks fine:
Reviewed-by: Christoph Hellwig <h...@lst.de>
So as pointed out in the last run (after changing my mind deeper into
the series) I think we should instead use a BLK_MQ_REQ_PREEMPT flag.
The preempt only makes sense at the request level, not for file system
requests. For the legacy case we can add blk_get_request_flags that
takes the
Bart, Ming:
can you guys please work a little better together? We've now got two
patchsets that are getting very similar.
Bart, please at least CC Ming when you send out the patches.
Ming - instead of sending a separate series right after Bart's, a
differential series would be nice. This also
On Tue, Oct 03, 2017 at 06:22:23PM +0200, Paolo Bonzini wrote:
> On 21/09/2017 16:49, Paolo Bonzini wrote:
> > After the first few months, the message has not led to many bug reports.
> > It's been almost five years now, and in practice the main source of
> > it seems to be MTIOCGET that someone
Users of the bsg-lib interface should only use the bsg_job data structure
and not know about implementation details of it.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/bsg-lib.c | 14 ++
include/linux/bsg-lib.h | 1 -
2 files changed, 6 insertions(+), 9 del
scsi_request anymore.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/bsg-lib.c | 158 +++
block/bsg.c | 257 +-
drivers/scsi/scsi_lib.c | 4 +-
drivers/scsi/scsi_s
Use the obvious calling convention.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/bsg.c| 18 --
block/scsi_ioctl.c | 8
drivers/scsi/sg.c | 2 +-
include/linux/blkdev.h | 2 +-
4 files changed, 14 insertions(+), 16 deletions(-)
Hi all,
this series cleans up various abuses of the bsg interfaces, and then
splits bsg for SCSI passthrough from bsg for arbitrary transport
passthrough. This removes the scsi_request abuse in bsg-lib that is
very confusing, and also makes sure we can sanity check the requests
we get. The
hat was previously done in bsg_init_rq(), and will
also do it when the request is taken from the emergency-pool of the
backing mempool.
Fixes: 50b4d485528d ("bsg-lib: fix kernel panic resulting from missing
allocation of reply-buffer")
Cc: <sta...@vger.kernel.org> # 4.11+
bsg_job_done takes care of updating the scsi_request structure fields.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/scsi/libfc/fc_lport.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/drivers/scsi/libfc/fc_lport.c b/drivers/scsi/libfc/fc_lport.c
index 2fd0ec
We already support 256 or more segments as long as the architecture
supports SG chaining (all the ones that matter do), so remove the
weird playing with limits from the job handler.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/scsi/bfa/bfad_bsg.c | 7 ---
1 file chan
Always use bsg_job->reply instead of scsi_req(bsg_job->req)->sense, as
they always point to the same memory.
Never set scsi_req(bsg_job->req)->result directly; we'll set that value through
bsg_job_done.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/scsi/q
As a user of bsg-lib the SAS transport should not poke into request
internals but use the bsg_job fields instead.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/scsi/scsi_transport_sas.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/driver
The zfcp driver wants to know the timeout for a bsg job, so add a field
to struct bsg_job for it in preparation for not exposing the request
to the bsg-lib users.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/bsg-lib.c | 1 +
drivers/s390/scsi/zfcp_fc.c | 4 ++--
i
This looks generally good to me, but I really worry about the impact
on very high iops devices. Did you try this e.g. for random reads
from unallocated blocks on an enterprise NVMe SSD?
> +bool blk_mq_sched_try_merge(struct request_queue *q, struct bio *bio,
> + struct request **merged_request)
> +{
> + struct request *rq;
> + enum elv_merge type = elv_merge(q, &rq, bio);
> +
> + return __blk_mq_try_merge(q, bio, merged_request, rq, type);
> +}
>
This looks good in general:
Reviewed-by: Christoph Hellwig <h...@lst.de>
Minor nitpicks below:
> const bool has_sched_dispatch = e && e->type->ops.mq.dispatch_request;
This is now only tested once, so you can remove the local variable
for it.
> + /*
On Sat, Sep 30, 2017 at 07:26:51PM +0800, Ming Lei wrote:
> So that we can reuse __elv_merge() to merge bio
> into requests from sw queue in the following patches.
>
> No functional change.
There are very few callers of elv_merge, just update the function
with the new parameters instead of
Can you get rid of the elv_rqhash in favour of these new ones
instead of having that tiny indirection?
Also the new helper should have some prefix - either keep elv_
or add blk_.
On Sat, Sep 30, 2017 at 06:27:19PM +0800, Ming Lei wrote:
> SCSI devices use host-wide tagset, and the shared
> driver tag space is often quite big. Meantime
> there is also queue depth for each lun(.cmd_per_lun),
> which is often small.
>
> So lots of requests may stay in sw queue, and we
>
On Sat, Sep 30, 2017 at 06:27:17PM +0800, Ming Lei wrote:
> This function is introduced for dequeuing request
> from sw queue so that we can dispatch it in
> scheduler's way.
>
> More importantly, some SCSI devices may set
> q->queue_depth, which is a per-request_queue limit,
> and applied on
This patch does too many things at once and needs a split. I also
don't really understand why it's in this series and not your dm-mpath
performance one.
> +static void blk_mq_request_direct_insert(struct blk_mq_hw_ctx *hctx,
> + struct request *rq)
> +{
> +
We need to look for an active PM request until the next softbarrier
instead of looking for the first non-PM request. Otherwise any cause
of request reordering might starve the PM request(s).
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/blk-core.
Hi Jens,
this series fixes the blocking/passing of requests during PM IFF there were
more than one. That currently isn't the case, but relying on that not only
is fragile, but also leads to more obscure code than handling it properly.
lly zeroing, unless BLKDEV_ZERO_NOFALLBACK is
> specified. For BLKDEV_ZERO_NOFALLBACK case, return -EOPNOTSUPP if
> sd_done() has just set ->no_write_same thus indicating lack of offload
> support.
>
> Fixes: c20cfc27a473 ("block: stop using blkdev_issue_write_same for zeroing")
> Cc
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
Looks good:
Reviewed-by: Christoph Hellwig <h...@lst.de>
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
Can you move this to the beginning of your series, just after
the other edits to blk_mq_sched_dispatch_requests?
> +static void blk_mq_do_dispatch_sched(struct request_queue *q,
> + struct elevator_queue *e,
> + struct
et rid of the blk_set_preempt_only
helper this looks good:
Reviewed-by: Christoph Hellwig <h...@lst.de>
> + /*
> + * Simply quiescing SCSI device isn't safe, it is easy
> + * to use up requests because all these allocated requests
> + * can't be dispatched when device is put in QUIESCE.
> + * Then no request can be allocated and we may hang
> + * somewhere, such as system
> +void blk_set_preempt_only(struct request_queue *q, bool preempt_only)
> +{
> + blk_mq_freeze_queue(q);
> + if (preempt_only)
> + queue_flag_set_unlocked(QUEUE_FLAG_PREEMPT_ONLY, q);
> + else
> + queue_flag_clear_unlocked(QUEUE_FLAG_PREEMPT_ONLY, q);
> +
> +void blk_set_preempt_only(struct request_queue *q, bool preempt_only)
> +{
> + unsigned long flags;
> +
> + spin_lock_irqsave(q->queue_lock, flags);
> + if (preempt_only)
> + queue_flag_set(QUEUE_FLAG_PREEMPT_ONLY, q);
> + else
> +
As mentioned in the reply to Bart, I'd much rather use BLK_MQ_REQ_*
for the sane blk_get_request version (which should have the same
prototype as blk_mq_alloc_request) instead of having another flags
namespace.
On Mon, Oct 02, 2017 at 03:42:44PM +0200, Christoph Hellwig wrote:
> This looks ok to me, or at least better than the version from Ming to
> achieve the same. I kinda hate to add more REQ_* flags than
> really necessary though. Maybe instead of the mapping to REQ_* as
> sugge
This looks ok to me, or at least better than the version from Ming to
achieve the same. I kinda hate to add more REQ_* flags than
really necessary though. Maybe instead of the mapping to REQ_* as
suggested to ming blk_queue_enter should instead take the BLK_MQ_REQ_*
flags and we'll add
On Sat, Sep 30, 2017 at 02:12:11PM +0800, Ming Lei wrote:
> We need to pass PREEMPT flags to blk_queue_enter()
> for allocating request with RQF_PREEMPT in the
> following patch.
I don't like having another name space for flags. It seems like
we should simply pass ops on, and then map the nowait
I think I already gave it to basically the same patch as queued
up by Bart, but here again:
Reviewed-by: Christoph Hellwig <h...@lst.de>
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
Independent of whether this should be required to make scsi quiesce
safe or not, making the dm thread freeze-aware is the right thing
to do, and this patch looks correct to me:
Reviewed-by: Christoph Hellwig <h...@lst.de>
On Sat, Sep 30, 2017 at 10:06:45AM +0200, Jens Axboe wrote:
> For some reason, the laptop mode IO completion notifier was never wired
> up for blk-mq. Ensure that we trigger the callback appropriately, to arm
> the laptop mode flush timer.
Looks fine:
Reviewed-by: Christoph Hellwig <h...@lst.de>
On Fri, Sep 29, 2017 at 10:21:53PM +0800, Tony Yang wrote:
> Hi, All
>
> Because my environment requirements, the kernel must use 4.8.17,
> I would like to ask, how to use the kernel 4.8.17 nvme multi-path?
> Because I see support for multi-path versions are above 4.13
In that case we
xit_job;
> + q->initialize_rq_fn = bsg_init_rq;
Please use function names that match the method names, that is keep
the existing names and name the new helper bsg_initialize_rq;
Except for that the patch looks fine to me:
Reviewed-by: Christoph Hellwig <h...@lst.de>
On Mon, Sep 25, 2017 at 04:05:17PM +0200, Hannes Reinecke wrote:
> On 09/25/2017 03:50 PM, Christoph Hellwig wrote:
> > On Mon, Sep 25, 2017 at 03:47:43PM +0200, Hannes Reinecke wrote:
> >> Can't we make the multipath support invisible to the host?
> >> IE check t
On Mon, Sep 25, 2017 at 03:47:43PM +0200, Hannes Reinecke wrote:
> Can't we make the multipath support invisible to the host?
> IE check the shared namespaces before creating the device node, and just
> move them under the existing namespaces if one exists?
That was what my first version did, but
Set aside a bit in the request/bio flags for driver use.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
include/linux/blk_types.h | 5 +
1 file changed, 5 insertions(+)
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index acc2f3cdc2fc..7ec2ed097a8a
Hi all,
this series adds support for multipathing, that is, accessing nvme
namespaces through multiple controllers, to the nvme core driver.
It is a very thin and efficient implementation that relies on
close cooperation with other bits of the nvme driver, and a few small
and simple block helpers.
snip --
Note that these create the new persistent names. Overriding the existing
nvme ones would be nicer, but while that works for the first path, the
normal rule will override it again for each subsequent path.
Signed-off-by: Christoph Hellwig <h...@lst.
This helper allows stealing the uncompleted bios from a request so
that they can be reissued on another path.
Signed-off-by: Christoph Hellwig <h...@lst.de>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
---
block/blk-core.c | 20
include/linux/b
We do this by adding a helper that returns the ns_head for a device that
can belong to either the per-controller or per-subsystem block device
nodes, and otherwise reuse all the existing code.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/nvme/host/core.
This allows us to manage the various unique namespace identifiers
together instead of needing various variables and arguments.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
drivers/nvme/host/core.c | 69 +++-
drivers/nvme/host/nvme.
-by: Christoph Hellwig <h...@lst.de>
---
drivers/nvme/host/core.c | 192 +--
drivers/nvme/host/lightnvm.c | 14 ++--
drivers/nvme/host/nvme.h | 21 -
3 files changed, 192 insertions(+), 35 deletions(-)
diff --git a/drivers/nvme/host/core.c b/d
This adds a new nvme_subsystem structure so that we can track multiple
controllers that belong to a single subsystem. For now we only use it
to store the NQN, and to check that we don't have duplicate NQNs unless
the involved subsystems support multiple controllers.
Signed-off-by: Christoph
This flag should be before the operation-specific REQ_NOUNMAP bit.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
include/linux/blk_types.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index a2d2aa
This helper allows reinserting a bio into a new queue without much
overhead, but requires all queue limits to be the same for the upper
and lower queues, and it does not provide any recursion prevention.
Signed-off-by: Christoph Hellwig <h...@lst.de>
---
block/blk-core.c
On Thu, Sep 21, 2017 at 11:16:31AM -0400, Keith Busch wrote:
> If there weren't resistence to renaming structs, it would be more
> aligned to how the specification calls these if we rename nvme_ns to
> nvme_ns_path, and what you're calling nvme_ns_head should just be
> the nvme_ns.
Then we'd
> +static struct request *
> +deadline_next_request(struct deadline_data *dd, int data_dir)
> +{
if (WARN_ON_ONCE(data_dir != READ && data_dir != WRITE))
return NULL;
> + return dd->next_rq[data_dir];
> +}
> +
Else looks fine to me:
Revi
On Sun, Sep 24, 2017 at 04:02:42PM +0900, Damien Le Moal wrote:
> Zone write locking is mandatory for host managed zoned block devices so
> that sequential write order can be maintained. This is however optional
> for host aware devices as the device firmware can handle random writes.
>
> The
> + if (q->seq_zones && test_bit(zno, q->seq_zones))
> + return BLKPREP_OK;
Isn't the check above inverted? Also shouldn't it use blk_rq_zone_is_seq?
E.g.
if (!blk_rq_zone_is_seq(cmd->request))
return BLKPREP_OK;
?
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
> +
> + struct request_queue *q;
Do you really need the queue backpointer? At least as far as this
patch is concerned we could just pass the queue on to
deadline_enable_zones_wlock and be fine. And in general we should
always pass the q, as we can trivially go from queue to deadline_data
> +static inline unsigned long *sd_zbc_alloc_zone_bitmap(struct scsi_disk *sdkp)
> +{
> + struct request_queue *q = sdkp->disk->queue;
> +
> + return kzalloc_node(BITS_TO_LONGS(sdkp->nr_zones)
> + * sizeof(unsigned long),
> + GFP_KERNEL,
> Documentation/block/zoned-iosched.txt | 48 ++
> block/Kconfig.iosched | 12 +
> block/Makefile| 1 +
> block/blk-mq-debugfs.h| 14 +-
> block/blk-mq-sched.h | 11 +-
> block/zoned-iosched.c | 925
>
Looks fine,
Reviewed-by: Christoph Hellwig <h...@lst.de>
On Fri, Sep 22, 2017 at 11:09:16AM -0400, Keith Busch wrote:
> On Mon, Sep 18, 2017 at 04:14:53PM -0700, Christoph Hellwig wrote:
> > +static void nvme_failover_req(struct request *req)
> > +{
> > + struct nvme_ns *ns = req->q->queuedata;
On Thu, Sep 21, 2017 at 05:12:46PM -0400, Keith Busch wrote:
> BTW, considered persistent nameing rules to symlink these from
> /dev/disk/by-id/? May need to add an attribute to the multipath object
> to assist that.
Yes, we do. I've just been trying to push out udev/systemd work
as long as
On Fri, Sep 22, 2017 at 08:21:09AM +0800, Tony Yang wrote:
> Excuse me,
>ask a junior question, how can you complete the nvme mpath package
> clone, I use git clone, after completion, found no nvme directory.
>
>
> [root@scst1 soft]# git clone
> git://git.infradead.org/users/hch/block.git
On Fri, Sep 22, 2017 at 05:18:49PM -1000, Linus Torvalds wrote:
> WTF? Why is this so hard? It used to be that IDE drove people crazy.
> Now it's NVMe and generic block layer stuff.
Can you please explain what problems you have with the nvme patches?
They are tiny and fix either user reported