There is no more need for this in a blk-mq world where the SCSI command
and the request are allocated together.
Signed-off-by: Christoph Hellwig
---
drivers/scsi/qedf/qedf_io.c | 6 --
drivers/scsi/qedi/qedi_fw.c | 7 ---
drivers/scsi/scsi_lib.c | 3 ---
drivers/scsi/sr.c | 1 -
4 files changed, 17 deletions(-)
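For context: in blk-mq the SCSI command is simply the per-request driver
payload, so it can always be derived from the request instead of being
stashed in ->special. A minimal sketch (scsi_cmd_from_rq is a hypothetical
name, blk_mq_rq_to_pdu is the real blk-mq accessor):

	static inline struct scsi_cmnd *scsi_cmd_from_rq(struct request *rq)
	{
		/* the command is allocated immediately behind the request */
		return blk_mq_rq_to_pdu(rq);
	}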
For some reason patch 5 didn't make it to my inbox, but assuming
nothing has changed this whole series looks good to me now.
On Fri, Feb 01, 2019 at 05:03:40PM +0100, Heinz Mauelshagen wrote:
> On 2/1/19 3:09 PM, John Dorminy wrote:
> > I didn't know such a thing existed... does it work on any block
> > device? Where do I read more about this?
>
>
> Use sg_write_same(8) from package sg3_utils.
>
> For instance 'sg_wri
FYI, this needs the following fold, as Bart added another reference
to ->special past the branch point for my tree:
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 4fbb8310e268..a1e43e77ceef 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -1171,8 +1171,6 @@ static blk_status_t s
On Mon, Feb 04, 2019 at 07:24:43AM +0100, Ulf Hansson wrote:
> > The above results in multiple hung tasks that lead to failures to boot.
> >
> > Switching complete_work to the system_highpri queue avoids this
> > because system_highpri is not flagged with WQ_MEM_RECLAIM. This allows
> > the host to
On Mon, Feb 04, 2019 at 01:30:37PM +0100, Ulf Hansson wrote:
> Although, I am not sure why having our own mmc workqueue, would fix
> this problem. Couldn't we hit the same kind of deadlock anyways you
> think?
Maybe I misunderstood the issue. I thought the problem was that
the one rescuer kblockd
On Mon, Feb 04, 2019 at 10:02:18AM +0200, Adrian Hunter wrote:
> On 4/02/19 8:24 AM, Ulf Hansson wrote:
> > + Jens, Christoph, Adrian, Linus
> >
> > On Thu, 31 Jan 2019 at 21:16, Zachary Hays wrote:
> >>
> >> The kblockd workqueue is created with the WQ_MEM_RECLAIM flag set.
> >> This generates a
On Wed, Jan 30, 2019 at 10:44:24AM +0100, Hannes Reinecke wrote:
> The 'response' buffer from bsg is mapped onto the SCSI sense buffer,
> however after commit 82ed4db499b8 we need to allocate them ourselves
> as the bsg queue is _not_ a SCSI queue, and hence the sense buffer
> won't be allocated fr
On Fri, Feb 01, 2019 at 09:50:08PM +0100, David Kozub wrote:
> This should make no change in functionality.
> The formatting changes were triggered by checkpatch.pl.
>
> Signed-off-by: David Kozub
> Reviewed-by: Scott Bauer
> ---
> block/sed-opal.c | 19 +++
> 1 file changed, 11
> Reviewed-by: Scott Bauer
Looks good,
Reviewed-by: Christoph Hellwig
Dave, when you resubmit other peoples patches you should also add
your Signed-off-by: line to the end of the signoff chain.
> static void add_token_u8(int *err, struct opal_dev *cmd, u8 tok)
> +static size_t remaining_size(struct opal_dev *cmd)
> +{
> + return IO_BUFFER_LENGTH - cmd->pos;
> +}
This function seems a little pointless to me, at least as of this patch
where it only has a single user just below.
Otherwise this looks good to me:
Reviewed-by: Christoph Hellwig
> Signed-off-by: David Kozub
> Signed-off-by: Jonas Rabenstein
> Reviewed-by: Scott Bauer
Looks good,
Reviewed-by: Christoph Hellwig
Looks good,
Reviewed-by: Christoph Hellwig
> Co-authored-by: Jonas Rabenstein
> Signed-off-by: David Kozub
> Signed-off-by: Jonas Rabenstein
> Reviewed-by: Scott Bauer
Looks good,
Reviewed-by: Christoph Hellwig
>
> Co-authored-by: Jonas Rabenstein
> Signed-off-by: David Kozub
> Signed-off-by: Jonas Rabenstein
> Reviewed-by: Scott Bauer
Looks good,
Reviewed-by: Christoph Hellwig
On Fri, Feb 01, 2019 at 09:50:15PM +0100, David Kozub wrote:
> From: Jonas Rabenstein
>
> Add function address (and if available its symbol) to the message if a
> step function fails.
>
> Signed-off-by: Jonas Rabenstein
> Reviewed-by: Scott Bauer
Looks good:
Reviewed-by: Christoph Hellwig
> + start = &cmd->cmd[cmd->pos];
> + return start;
No need for the local start variable here, just return the computed
address directly.
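That is, something like this (a sketch of the suggestion only; the helper
name is made up, the fields are the ones quoted above):

	static u8 *opal_cmd_pos(struct opal_dev *cmd)
	{
		return &cmd->cmd[cmd->pos];
	}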
Otherwise looks good:
Reviewed-by: Christoph Hellwig
On Fri, Feb 01, 2019 at 09:50:17PM +0100, David Kozub wrote:
> From: Jonas Rabenstein
>
> Enable users to mark the shadow mbr as done without completely
> deactivating the shadow mbr feature. This may be useful on reboots,
> when the power to the disk is not disconnected in between and the shadow
the refactoring,
and probably should be a patch of its own.
> - /* 0x08 is Manufacured Inactive */
> + /* 0x08 is Manufactured Inactive */
Shouldn't this go into the typo fixes patch at the beginning of
the series?
Otherwise looks fine:
Reviewed-by: Christoph Hellwig
> Reviewed-by: Scott Bauer
Looks good,
Reviewed-by: Christoph Hellwig
		if (error)
			goto out_error;
	}
	return 0;
out_error:
	if (state > 1)
		end_opal_session_error(dev);
	return error;
Otherwise looks good:
Reviewed-by: Christoph Hellwig
> Reviewed-by: Scott Bauer
Looks good,
Reviewed-by: Christoph Hellwig
On Fri, Feb 01, 2019 at 09:50:07PM +0100, David Kozub wrote:
> This patch series extends SED OPAL support: it adds an IOCTL for setting
> the shadow MBR done flag, which can be useful for unlocking an OPAL disk
> on boot, and it adds an IOCTL for writing to the shadow MBR. Also included
> are some mino
Hi Jens,
below is our current (small) queue of NVMe patches for Linux 5.1. We
want the re-addition of the Write Zeroes support to be in linux-next for
a few weeks as it caused some problems last time. The only other
patch is a cleanup from Sagi.
The following changes since commit bb94aea1444b985
On Mon, Feb 04, 2019 at 04:37:46PM +0100, Hannes Reinecke wrote:
> static int bsg_scsi_complete_rq(struct request *rq, struct sg_io_v4 *hdr)
So this is bsg_scsi_ops that you quote.
> This expects the 'response' to be allocated.
> Yet nowhere in the block/bsg.c we actually _do_ allocate the 'respo
On Mon, Feb 04, 2019 at 10:36:10AM -0500, Scott Bauer wrote:
> > Which brings up another question: how do we get a properly maintained
> > version of the sed-opal tool up ASAP? It's been a bit bitrotting
> > unfortunately, and the documentation and error handling hasn't been all
> > that great to
On Mon, Feb 04, 2019 at 10:07:09PM +0100, David Kozub wrote:
> It is eventually used for the second time in 11/16 block: sed-opal: ioctl
> for writing to shadow mbr.
>
> If you feel strongly about this I can exclude it from this commit and
> introduce it in 11/16 (where it then will be called from he
On Tue, Feb 05, 2019 at 12:06:54AM +0100, David Kozub wrote:
> This will unfortunately trigger some changes (conflict resolving - e.g. if I
> move the last two patches in the current series forward, in front of the
> patches with new functionality). What is the proper procedure w.r.t.
> Reviewed-by
On Mon, Feb 04, 2019 at 10:31:39PM +0200, Boaz Harrosh wrote:
> On 01/02/19 09:55, Christoph Hellwig wrote:
> > The only real user of the T10 OSD protocol, the pNFS object layout
> > driver never went to the point of having shipping products, and we
> > removed it 1.5 years
On Tue, Feb 05, 2019 at 03:09:28PM +0000, John Garry wrote:
> For SCSI devices, unfortunately not all IO sent to the HW originates from
> blk-mq or any other single entity.
Where else would SCSI I/O originate from?
The following changes since commit ec51f8ee1e63498e9f521ec0e5a6d04622bb2c67:
  aio: initialize kiocb private in case any filesystems expect it. (2019-02-06 08:04:22 -0700)
are available in the Git repository at:
git://git.infradead.org/nvme.git nvme-5.0
for you to fetch changes up to 5c959d7
On Fri, Feb 08, 2019 at 06:38:31PM -0500, Martin K. Petersen wrote:
> Some devices come online in write protected state and switch to
> read-write once they are ready to process I/O requests.
That is really weird. What kind of devices are these?
> Note that per-partition ro settings are lost on
Looks good:
Reviewed-by: Christoph Hellwig
> sparse. With the flags renumbered,
> we can more clearly see how many we have available.
>
> Signed-off-by: Jens Axboe
Looks good,
Reviewed-by: Christoph Hellwig
Btw, I think we should kill off QUEUE_FLAG_MQ_DEFAULT as well and open
code it in blk_mq_init_allocated_queue instead.
The following changes since commit aef1897cd36dcf5e296f1d2bae7e0d268561b685:
  blk-mq: insert rq with DONTPREP to hctx dispatch list when requeue (2019-02-11 19:51:52 -0700)
are available in the Git repository at:
git://git.infradead.org/nvme.git nvme-5.0
for you to fetch changes up to 4726b
I still don't understand why mp_bvec_last_segment isn't simply
called bvec_last_segment as there is no conflict. But I don't
want to hold this series up on that as there only are two users
left and we can always just fix it up later.
We shouldn't be allocating a scatterlist for a command that doesn't
have a payload.
The blk_rq_payload_bytes check in nvme_rdma_map_data is supposed to
prevent that.
Chaitanya, can you try to debug why this is not working? I'm on
vacation and don't have much time right now unfortunately.
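For reference, the guard I mean looks roughly like this at the top of the
mapping path (sketch from memory, the exact code in drivers/nvme/host/rdma.c
may differ):

	if (!blk_rq_payload_bytes(rq))
		return nvme_rdma_set_sg_null(c);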
On Thu, Feb 21, 2019 at 01:29:57AM +0000, Chaitanya Kulkarni wrote:
> Hi Martin,
>
> I don't mind going though that route, here are some points about
> benefits of not using REQ_SPECIAL_PAYLOAD for write-zeroes :-
>
> 1. We are using RQF_SPECIAL_PAYLOAD for only discard commands and not for
>
Christoph Hellwig (13):
nvme_ioctl.h: remove duplicate GPL boilerplate
nvme-tcp.h: fix SPDX header
nvme-fabrics: convert to SPDX identifiers
nvme-fc: convert to SPDX identifiers
nvme-rdma: convert to SPDX identifiers
nvme-lightnvm: convert to SPDX
On Tue, Feb 12, 2019 at 09:57:17PM -0500, Martin K. Petersen wrote:
> Some devices come online in write protected state and switch to
> read-write once they are ready to process I/O requests. These devices
> broke with commit 20bd1d026aac ("scsi: sd: Keep disk read-only when
> re-reading partition"
>>>>> On Mon, Feb 11, 2019 at 12:00:41PM -0700, Jens Axboe wrote:
> >>>>>>>>>>>>>>>>>> For an ITER_BVEC, we can just iterate the iov and add the
On Thu, Feb 28, 2019 at 11:24:21AM +0800, Ming Lei wrote:
> bio_for_each_bvec is used in fast path of bio splitting and sg mapping,
> and what we want to do is to iterate over multi-page bvecs, instead of pages.
> However, bvec_iter_advance() is invisible for this requirement, and
> always advance b
On Wed, Feb 27, 2019 at 08:40:13PM +0800, Ming Lei wrote:
> mp_bvec_for_each_segment() is a bit big for the iteration, so introduce
> a light-weight helper for iterating over pages, then 32 bytes of stack
> space can be saved.
The version in Jens' tree seems to add this helper, but no actual
users.
multi-page bvecs.
This should help shaving off a few cycles in the I/O hot path.
Signed-off-by: Christoph Hellwig
---
include/linux/bvec.h | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index 2c32e3e151a0
> >>>>>>>>>>> releases
> >>>>>>>>>>> the pages on IO completion, we add a BIO_NO_PAGE_REF flag for
> >>>>>>>>>>> that
On Tue, Feb 26, 2019 at 06:57:16PM -0700, Jens Axboe wrote:
> Speaking of this, I took a quick look at why we've now regressed a lot
> on IOPS perf with the multipage work. It looks like it's all related to
> the (much) fatter setup around iteration, which is related to this very
> topic too.
> Ba
On Wed, Feb 27, 2019 at 08:07:31AM +0100, Vlastimil Babka wrote:
> > I don't know _what_ Ming Lei is saying. I thought the problem was
> > with slab redzones, which need to be before and after each object,
> > but apparently the problem is with KASAN as well.
>
> That's what I thought as well. Bu
> +static inline bool
> +page_is_mergeable(const struct bio_vec *bv, struct page *page,
> + unsigned int len, unsigned int off, bool same_page)
Please follow the other function declarations in this file:

static inline bool page_is_mergeable(const struct bio_vec *bv,
		struct page *page, unsigned int len, unsigned int off,
		bool same_page)
> +int __bio_add_pc_page(struct request_queue *q, struct bio *bio, struct page
> + *page, unsigned int len, unsigned int offset,
> + bool put_same_page)
Very odd indentation, we try to never have a linebreak between the type
and its parameter name.
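I.e. the expected style here would be roughly:

	int __bio_add_pc_page(struct request_queue *q, struct bio *bio,
			struct page *page, unsigned int len,
			unsigned int offset, bool put_same_page)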
> +static in
> struct bio_vec *prev = &bio->bi_io_vec[bio->bi_vcnt - 1];
>
> + /* segment size is always >= PAGE_SIZE */
I don't think that actually is true. We have various drivers with 4k
segment size, for which this would not be true with a 64k page size
system.
>
> + bool new_bio;
>
> if (!bio)
> return 0;
> @@ -377,9 +377,10 @@ static unsigned int __blk_recalc_rq_segments(struct
> request_queue *q,
> fbio = bio;
> seg_size = 0;
> nr_phys_segs = 0;
> + new_bio = false;
I'd just initialize it to false in the declaration.
On Mon, Mar 11, 2019 at 08:54:59AM -0600, Keith Busch wrote:
> In QEMU, blk_aio_pwrite_zeroes() takes bytes, but the nvme controller
> thought it was blocks. Oops, that went by unnoticed till now!
>
> We should fix QEMU (patch below). Question is, should we quirk driver
> for older versions too?
From a quick look the code seems reasonably sensible here,
but any chance we could have this in common code?
> +static bool nvme_fail_queue_request(struct request *req, void *data, bool reserved)
> +{
> + struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
> + struct nvme_queue *nvmeq = iod-
Hi Jens,
git.infradead.org seems to have some hiccups at the moment, so
this bunch of nvme fixes will come in form of a patchbomb instead.
Most important is quirking Write Zeroes on qemu, as the implementation
there is buggy and could lead to ext4 data corruption. Except for
that we have various
From: Keith Busch
A write or flush IO passthrough command is expected to change the
logical block content, so don't warn on these as no additional handling
is necessary.
Signed-off-by: Keith Busch
Reviewed-by: Sagi Grimberg
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/core.
Thumshirn
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/trace.c | 14 ++
1 file changed, 14 insertions(+)
diff --git a/drivers/nvme/host/trace.c b/drivers/nvme/host/trace.c
index 58456de78bb2..5f24ea7a28eb 100644
--- a/drivers/nvme/host/trace.c
+++ b/drivers/nvme/host/trace.
From: Keith Busch
The field is defined to be a 24 byte array; we don't need to multiply
the sizeof() of that field by the number of dwords it covers.
Signed-off-by: Keith Busch
Reviewed-by: Sagi Grimberg
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/trace.h | 2 +-
1 file chang
Fix the problem by simply rejecting the new association if at least 1
I/O queue can't be created. The association reject will fail the
reconnect attempt and fall into the reconnect retry policy.
Signed-off-by: James Smart
Reviewed-by: Sagi Grimberg
Signed-off-by: Christoph Hellwig
---
dr
by doing the put on a failing schedule_work() call.
Signed-off-by: Nigel Kirkland
Signed-off-by: James Smart
Reviewed-by: Ewan D. Milne
Signed-off-by: Christoph Hellwig
---
drivers/nvme/target/fc.c | 9 -
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/drivers/nvme/target
From: Yufen Yu
After commit a686ed75c0fb ("nvme: introduce a helper function for
controller deletion"), nvme_delete_ctrl_sync no longer uses flush_work.
Update the comment accordingly.
Signed-off-by: Yufen Yu
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/core.c | 4 ++--
1 file ch
(), which may be more reasonable.
Signed-off-by: Yufen Yu
Reviewed-by: Sagi Grimberg
Reviewed-by: Bart Van Assche
Signed-off-by: Christoph Hellwig
---
drivers/nvme/target/core.c | 20 ++--
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/drivers/nvme/target/core.c b
NVMe DSM is a pure hint, so if the underlying device / file system
does not support discard-like operations we should not fail the
operation but rather return success.
Fixes: 3b031d15995f ("nvmet: add error log support for bdev backend")
Signed-off-by: Christoph Hellwig
Reviewed-by:
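A sketch of the intended mapping (the helper name is made up and the real
bdev backend code may be structured differently):

	static u16 nvmet_dsm_discard_status(int ret)
	{
		/* DSM is advisory: "not supported" must not fail the command */
		if (ret == -EOPNOTSUPP)
			return NVME_SC_SUCCESS;
		return ret ? NVME_SC_INTERNAL | NVME_SC_DNR : NVME_SC_SUCCESS;
	}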
Signed-off-by: Sagi Grimberg
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/core.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index dc1641247b17..d57a84f45ed0 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/c
Fixes: 103e515efa89b ("nvme: add a numa_node field to struct nvme_ctrl")
Reported-by: Mike Snitzer
Signed-off-by: James Smart
Reviewed-by: Sagi Grimberg
Reviewed-by: Hannes Reinecke
Reviewed-by: Mike Snitzer
[hch: small coding style fixup]
Signed-off-by: Christoph Hellwig
---
drivers
When making choices on whether to dma map an sgl,
use blk_rq_nr_phys_segments() instead of blk_rq_payload_bytes().
When there is a sgl, blk_rq_payload_bytes() will return the amount
of data to be transferred by the sgl.
Signed-off-by: Chaitanya Kulkarni
Signed-off-by: James Smart
Signed-off-by: Christoph
Oliver Smith-Denny
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/tcp.c | 32
1 file changed, 28 insertions(+), 4 deletions(-)
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 208ee518af65..e7e08889865e 100644
--- a/drivers/nvme/host/
Add a gendisk argument to nvme_config_discard so that the call to
nvme_update_disk_info for the multipath device node updates the
proper request_queue.
Signed-off-by: Christoph Hellwig
Reported-by: Sagi Grimberg
Reviewed-by: Keith Busch
Reviewed-by: Max Gurtovoy
Tested-by: Sagi Grimberg
Qemu started out with a broken implementation of Write Zeroes written
by yours truly. Disable Write Zeroes on qemu for now, eventually
we need to go back and make all the qemu quirks version specific,
but that is left for another time.
Signed-off-by: Christoph Hellwig
Reviewed-by: Keith Busch
Add a gendisk argument to nvme_config_write_zeroes so that the call to
nvme_update_disk_info for the multipath device node updates the
proper request_queue.
Signed-off-by: Christoph Hellwig
Reviewed-by: Keith Busch
Reviewed-by: Max Gurtovoy
Tested-by: Sagi Grimberg
---
drivers/nvme/host
Just opencode the two function calls in the caller.
Signed-off-by: Christoph Hellwig
Reviewed-by: Keith Busch
Reviewed-by: Max Gurtovoy
Reviewed-by: Chaitanya Kulkarni
Tested-by: Sagi Grimberg
---
drivers/nvme/host/core.c | 10 +++---
1 file changed, 3 insertions(+), 7 deletions
Reviewed-by: Ewan D. Milne
Signed-off-by: Christoph Hellwig
---
drivers/nvme/target/fc.c | 33 ++---
1 file changed, 2 insertions(+), 31 deletions(-)
diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index 8d34aa573d5b..7f051f9dfa8f 100644
--- a/drivers/nvme/target/fc.c
Return the currently active bvec segment, potentially spanning multiple
pages.
Signed-off-by: Christoph Hellwig
---
include/linux/blkdev.h | 7 +++
1 file changed, 7 insertions(+)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 0de92b29f589..255e20313cde 100644
--- a
Hi all,
with all the discussion on small I/O performance lately I thought
it was time to dust off my old idea to optimize this path a bit
by avoiding building a scatterlist. I've only done very basic
testing because I've been a bit busy, but I thought it might be
worthwhile to get it out for feedback
This means we now have a function that undoes everything nvme_map_data
does and we can simplify the error handling a bit.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/pci.c | 44 -
1 file changed, 17 insertions(+), 27 deletions(-)
diff --git a
In a lot of places we want to know the DMA direction for a given
struct request. Add a little helper to make it a little easier.
Signed-off-by: Christoph Hellwig
---
include/linux/blkdev.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
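The helper is essentially (sketch, modulo the exact form that was merged):

	static inline enum dma_data_direction rq_dma_dir(struct request *rq)
	{
		return op_is_write(req_op(rq)) ?
			DMA_TO_DEVICE : DMA_FROM_DEVICE;
	}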
Provide a nice little shortcut for mapping a single bvec.
Signed-off-by: Christoph Hellwig
---
include/linux/blkdev.h | 4
1 file changed, 4 insertions(+)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 5279104527ad..322ff969659c 100644
--- a/include/linux/blkdev.h
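The shortcut boils down to something like this (sketch, assuming it
forwards to dma_map_page_attrs with page/offset/len taken from the bvec):

	#define dma_map_bvec(dev, bv, dir, attrs) \
		dma_map_page_attrs(dev, (bv)->bv_page, (bv)->bv_offset, \
				   (bv)->bv_len, (dir), (attrs))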
Cleaning up the command setup isn't related to unmapping data, and
disentangling them will simplify error handling a bit down the road.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/pci.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/nvme/host/pc
This mirrors how nvme_map_pci is called and will allow simplifying some
checks in nvme_unmap_pci later on.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/pci.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/pci.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index a90cf5d63aac..bf0d71fe243e 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -220,7 +220,6 @@ struct
This provides a nice little shortcut to get the integrity data for
drivers like NVMe that only support a single integrity segment.
Signed-off-by: Christoph Hellwig
---
include/linux/blkdev.h | 15 +++
1 file changed, 15 insertions(+)
diff --git a/include/linux/blkdev.h b/include
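A sketch of what such a helper looks like (the merged version may differ
in details):

	/* only for drivers limited to a single integrity segment */
	static inline struct bio_vec *rq_integrity_vec(struct request *rq)
	{
		if (queue_max_integrity_segments(rq->q) > 1)
			return NULL;
		return rq->bio->bi_integrity->bip_vec;
	}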
nvme_init_iod should really be split into two parts: initialize a few
general iod fields, which can easily be done at the beginning of
nvme_queue_rq, and allocating the scatterlist if needed, which logically
belongs into nvme_map_data with the code making use of it.
Signed-off-by: Christoph
We always have exactly one segment, so we can simply call dma_map_bvec.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/pci.c | 23 ++-
1 file changed, 10 insertions(+), 13 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index bc4ee869fe82
If a request is single segment and fits into one or two PRP entries we
do not have to create a scatterlist for it, but can just map the bio_vec
directly.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/pci.c | 45 -
1 file changed, 40 insertions
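The fast path amounts to roughly the following (abridged sketch; bv comes
from req_bvec(req), and dev->ctrl.page_size is the controller page size):

	unsigned int offset = bv.bv_offset & (dev->ctrl.page_size - 1);
	unsigned int first_prp_len = dev->ctrl.page_size - offset;

	iod->first_dma = dma_map_bvec(dev->dev, &bv, rq_dma_dir(req), 0);
	if (dma_mapping_error(dev->dev, iod->first_dma))
		return BLK_STS_RESOURCE;

	cmnd->dptr.prp1 = cpu_to_le64(iod->first_dma);
	if (bv.bv_len > first_prp_len)
		cmnd->dptr.prp2 = cpu_to_le64(iod->first_dma + first_prp_len);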
This prepares for some bigger changes to the data mapping helpers.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/pci.c | 48 +++--
1 file changed, 27 insertions(+), 21 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index
If the controller supports SGLs we can take another short cut for single
segment request, given that we can always map those without another
indirection structure, and thus don't need to create a scatterlist
structure.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/pci.c
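The corresponding SGL shortcut is even simpler (abridged sketch; a single
data block descriptor covers the whole bvec):

	iod->first_dma = dma_map_bvec(dev->dev, &bv, rq_dma_dir(req), 0);
	if (dma_mapping_error(dev->dev, iod->first_dma))
		return BLK_STS_RESOURCE;

	cmnd->flags = NVME_CMD_SGL_METABUF;
	cmnd->dptr.sgl.addr = cpu_to_le64(iod->first_dma);
	cmnd->dptr.sgl.length = cpu_to_le32(bv.bv_len);
	cmnd->dptr.sgl.type = NVME_SGL_FMT_DATA_DESC << 4;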
We'll have a better way to optimize for small I/O that doesn't
require it soon, so remove the existing inline_sg case to make that
optimization easier to implement.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/pci.c | 38 ++
1 file
Remove two pointless local variables, remove ret assignment that is
never used, move the use_sgl initialization closer to where it is used.
Signed-off-by: Christoph Hellwig
---
drivers/nvme/host/pci.c | 17 +
1 file changed, 5 insertions(+), 12 deletions(-)
diff --git a/drivers
bio->bi_phys_segments++;
Otherwise this looks fine:
Reviewed-by: Christoph Hellwig
Looks fine,
Reviewed-by: Christoph Hellwig
Looks fine,
Reviewed-by: Christoph Hellwig
But how do you manage to get the tiny on-stack bios split? What kind
of setup is this?
> events will be asymmetrical:
> just an unplug without a plug.
>
> This patch adds trace plug and unplug for multiple queue devices in
> blk_mq_make_request(). After that, we can accurately trace plug and
> unplug for multiple queues.
>
> Signed-off-by: Yufen Yu
Looks good,
Reviewed-by: Christoph Hellwig
On Thu, Mar 21, 2019 at 02:40:51PM -0700, Sagi Grimberg wrote:
>> nvme_rdma_teardown_io_queues:
>> nvme_stop_queues(&ctrl->ctrl);
>> nvme_rdma_stop_io_queues(ctrl);
>> blk_mq_tagset_busy_iter(&ctrl->tag_set, nvme_cancel_request,
>> &ctrl->ctrl);
>
Please make this an EXPORT_SYMBOL_GPL as for all new low-level blk-mq
exports. Otherwise looks fine modulo any minor documentation nitpicks
that I might have seen in the thread:
Reviewed-by: Christoph Hellwig
Looks good,
Reviewed-by: Christoph Hellwig
On Mon, Mar 11, 2019 at 01:37:53PM -0600, Keith Busch wrote:
> > The only thing not purely block layer here is the enabled flag.
> > So if we had a per-hctx enabled flag we could lift this out of nvme,
> > and hopefully start reusing it in other drivers.
>
> Okay, I may even be able to drop the ne
On Mon, Mar 25, 2019 at 05:07:44AM +0000, Chaitanya Kulkarni wrote:
> > +static inline struct bio_vec req_bvec(struct request *rq)
> > +{
> > + if (rq->rq_flags & RQF_SPECIAL_PAYLOAD)
> > + return rq->special_vec;
> Quick question here mostly for my understanding, do we also have to
>
On Mon, Mar 25, 2019 at 05:10:20AM +0000, Chaitanya Kulkarni wrote:
> > +/*
> > + * Return the first bvec that contains integrity data. In general only
> > + * drivers that are limited to a single integrity segment should use this
> > + * helper.
> > + */
> > +static inline struct bio_vec *rq_inte
On Fri, Mar 22, 2019 at 02:06:43PM +0100, Johannes Thumshirn wrote:
> On 22/03/2019 00:10, Christoph Hellwig wrote:
> > In a lot of places we want to know the DMA direction for a given
> > struct request. Add a little helper to make it a littler easier.
>
> You introuduc
On Mon, Mar 25, 2019 at 05:19:34AM +0000, Chaitanya Kulkarni wrote:
> > @@ -913,9 +902,14 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx
> > *hctx,
> > struct nvme_queue *nvmeq = hctx->driver_data;
> > struct nvme_dev *dev = nvmeq->dev;
> > struct request *req = bd->rq;
> >
FYI, I've pulled the series including the block helpers into nvme-5.2,
with a few tiny changes for the review comments from Chaitanya.