Used for synchronous requests that need polling. If we are knowingly
sending a request down to a poll queue, we need a synchronous interface
to poll for its completion.
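For illustration, a minimal Python model of the synchronous interface described above (all names here are hypothetical, not from the patch): submit to a poll queue, then spin on poll() until the completion is reaped, since a poll queue raises no interrupt.

```python
import itertools

class PollQueue:
    """Toy poll queue: no interrupts, completions are only
    reaped when someone explicitly polls the CQ."""
    def __init__(self, completes_after=3):
        self._polls_left = completes_after

    def poll(self):
        # each call models one pass over the completion queue
        self._polls_left -= 1
        return self._polls_left <= 0

def submit_and_poll(queue):
    """Synchronous interface: busy-poll until the request completes."""
    for spins in itertools.count(1):
        if queue.poll():
            return spins
```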
Signed-off-by: Sagi Grimberg
---
block/blk-exec.c | 29 +
block/blk-mq.c | 8
Reviewed-by: Sagi Grimberg
Well, it should just be a little blk_poll loop, right?
Not so much about the poll loop, but the fact that we will need
to check if we need to poll for this special case every time in
.queue_rq and it's somewhat annoying...
I don't think we need to add any special check in queue_rq. Any
poll
Hey Sagi,
Hi Steve,
Is there no way to handle this in the core? Maybe have the polling context
transition to DIRECT when the queue becomes empty and before re-arming the
CQ?
That is what I suggested, but that would mean that we need to drain
the cq before making the switch, which mean
Sagi,
What other wins are there for this split ?
I'm considering whether it's worthwhile for fc as well, but the HOL issue
doesn't exist with fc. What else is being resolved?
I've pondered it myself, which is why I didn't add it to fc as well
(would have been easy enough I think). I guess
I found it cumbersome so I didn't really consider it...
Isn't it a bit awkward? We will need to implement polled connect
locally in nvme-rdma (because fabrics doesn't know anything about
queues, hctx or polling).
So we need to make sure in the block layer or I/O submitter that
REQ_HIPRI is only set if QUEUE_FLAG_POLL is supported. I think it would
also help if we rename it to REQ_POLL to make this more obvious.
It used to check for it, but was changed to look at nr_maps instead...
So I think this is
[adding Jens]
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1547,6 +1547,8 @@ static void __nvme_revalidate_disk(struct gendisk *disk,
struct nvme_id_ns *id)
if (ns->head->disk) {
nvme_update_disk_info(ns->head->disk, ns, id);
blk_q
Add an additional queue mapping for polling queues that will
host polling for latency critical I/O.
One caveat is that we don't want these queues to be pure polling
as we don't want to bother with polling for the initial nvmf connect
I/O. Hence, introduce ib_change_cq_ctx that will modify the
- queue->io_cpu = (qid == 0) ? 0 : qid - 1;
+ n = (qid ? qid - 1 : 0) % num_online_cpus();
+ queue->io_cpu = cpumask_next_wrap(n - 1, cpu_online_mask, -1, false);
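A Python model of the new mapping may help (cpumask_next_wrap is approximated; the helper names and online sets are mine, not from the patch): the queue index is taken modulo the number of online cpus, then snapped to an actually-online cpu, wrapping around the mask.

```python
def next_online_wrap(start, online):
    """Rough model of cpumask_next_wrap(start - 1, mask, -1, false):
    first online cpu >= start, wrapping to the lowest online cpu."""
    cpus = sorted(online)
    for cpu in cpus:
        if cpu >= start:
            return cpu
    return cpus[0]

def io_cpu_for_qid(qid, online):
    # new scheme from the hunk above: queue index modulo the
    # number of online cpus, snapped to an online cpu
    n = (qid - 1 if qid else 0) % len(online)
    return next_online_wrap(n, online)
```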
Nitpick: can you use an if/else here? The ? : construct just looks
like obfuscation.
If nothing else pops up I
+ {"nr-poll-queues", 'W', "LIST", CFG_INT, &cfg.nr_poll_queues,
required_argument, "number of poll queues to use (default 0)" },
Oops, this should be 'P'
nr_poll_queues specifies the number of additional queues that will
be connected for hosting polling latency critical I/O.
Signed-off-by: Sagi Grimberg
---
Documentation/nvme-connect.txt | 5 +
fabrics.c | 11 +++
2 files changed, 16 insertions(+)
diff --git a
Since the multipath device does not support polling (yet), we cannot
pass requests to the polling queue map, as those will not generate an
interrupt so we cannot reap the completion.
Signed-off-by: Sagi Grimberg
---
drivers/nvme/host/core.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a
Finally, we turn off polling support for nvme-multipath as it won't invoke
polling and our completion queues no longer generate any interrupts for
it. I didn't come up with a good way to get around it so far...
Sagi Grimberg (4):
nvme-fabrics: allow user to pass in nr_poll_queues
rd
IB_POLL_SOFTIRQ
and switch it to IB_POLL_DIRECT where it makes sense.
Signed-off-by: Sagi Grimberg
---
drivers/infiniband/core/cq.c | 102 ---
include/rdma/ib_verbs.h | 1 +
2 files changed, 71 insertions(+), 32 deletions(-)
diff --git a/drivers/infiniband
This argument will specify how many polling I/O queues
to connect when creating the controller. These I/O queues
will host I/O that is set with REQ_HIPRI.
Signed-off-by: Sagi Grimberg
---
drivers/nvme/host/fabrics.c | 16 +++-
drivers/nvme/host/fabrics.h | 3 +++
2 files changed
Every nvmf queue starts with a connect message that is the slow path
at setup time, and there is no need for polling (it is actually
hurtful). Instead, allocate the polling queue cq with IB_POLL_SOFTIRQ
and switch it to IB_POLL_DIRECT where it makes sense.
Signed-off-by: Sagi Grimberg
nr_write_queues specifies the number of additional queues that
will be connected. These queues will host write I/O (host to target
payload) while nr_io_queues will host read I/O (target to host payload).
Signed-off-by: Sagi Grimberg
---
Documentation/nvme-connect.txt | 5 +
fabrics.c
change logs
- collected review tags
- added nr-write-queues entry in nvme-cli documentation
Sagi Grimberg (5):
blk-mq-rdma: pass in queue map to blk_mq_rdma_map_queues
nvme-fabrics: add missing nvmf_ctrl_options documentation
nvme-fabrics: allow user to set nr_write_queues for separate queue
queue map.
Reviewed-by: Christoph Hellwig
Signed-off-by: Sagi Grimberg
---
drivers/nvme/host/fabrics.c | 15 ++-
drivers/nvme/host/fabrics.h | 3 +++
2 files changed, 17 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index
nr_write_queues.
Signed-off-by: Sagi Grimberg
---
drivers/nvme/host/tcp.c | 44 +++--
1 file changed, 38 insertions(+), 6 deletions(-)
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 15543358e245..61eeed758f4b 100644
--- a/drivers/nvme/host/tcp.c
nr_write_queues.
Signed-off-by: Sagi Grimberg
---
drivers/nvme/host/rdma.c | 28 +---
1 file changed, 25 insertions(+), 3 deletions(-)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 5057d5ab5aaa..6a7c546b4e74 100644
--- a/drivers/nvme/host/rdma.c
+++ b
will be used by nvme-rdma for queue map separation support.
Reviewed-by: Christoph Hellwig
Signed-off-by: Sagi Grimberg
---
block/blk-mq-rdma.c | 8
drivers/nvme/host/rdma.c| 2 +-
include/linux/blk-mq-rdma.h | 2 +-
3 files changed, 6 insertions(+), 6 deletions(-)
diff
Reviewed-by: Christoph Hellwig
Signed-off-by: Sagi Grimberg
---
drivers/nvme/host/fabrics.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index 524a02a67817..28dc916ef26b 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers
On Tue, Dec 11, 2018 at 02:49:36AM -0800, Sagi Grimberg wrote:
if (cfg.host_traddr) {
len = sprintf(p, ",host_traddr=%s", cfg.host_traddr);
if (len < 0)
@@ -1009,6 +1019,7 @@ int connect(const char *desc, int argc, char **argv)
+static int nvme_tcp_map_queues(struct blk_mq_tag_set *set)
+{
+ struct nvme_tcp_ctrl *ctrl = set->driver_data;
+ struct blk_mq_queue_map *map;
+
+ if (ctrl->ctrl.opts->nr_write_queues) {
+ /* separate read/write queues */
+ map = &set->map[HCTX_TYP
Signed-off-by: Sagi Grimberg
---
drivers/nvme/host/tcp.c | 53 -
1 file changed, 47 insertions(+), 6 deletions(-)
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 15543358e245..5c0ba99fb105 100644
--- a/drivers/nvme/host/tcp.c
+++ b
Reviewed-by: Sagi Grimberg
queue map.
Signed-off-by: Sagi Grimberg
---
drivers/nvme/host/fabrics.c | 15 ++-
drivers/nvme/host/fabrics.h | 3 +++
2 files changed, 17 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 9c62c6838b76..066c3a02e08b 100644
--- a
Signed-off-by: Sagi Grimberg
---
drivers/nvme/host/rdma.c | 39 ---
1 file changed, 36 insertions(+), 3 deletions(-)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 5057d5ab5aaa..cfe823a491f2 100644
--- a/drivers/nvme/host/rdma.c
+++ b
Signed-off-by: Sagi Grimberg
---
block/blk-mq-rdma.c | 8
drivers/nvme/host/rdma.c| 2 +-
include/linux/blk-mq-rdma.h | 2 +-
3 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/block/blk-mq-rdma.c b/block/blk-mq-rdma.c
index a71576aff3a5..45030a81a1ed 100644
nr_write_queues specifies the number of queues, additional to nr_io_queues, that
will be connected. These queues will host write I/O (host to target
payload) while nr_io_queues will host read I/O (target to host payload).
Signed-off-by: Sagi Grimberg
---
fabrics.c | 11 +++
1 file changed, 11
trivial behind the
cq API we use.
Note that with read/write separation, for rdma but especially for tcp, this
can be a very clear win as we minimize the risk of head-of-line blocking for
mixed workloads over a single tcp byte stream.
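A toy illustration of the head-of-line effect (the numbers and request names are made up): on one in-order stream a small read waits behind a large write; with a separate read queue it does not.

```python
def completion_times(queue, cost):
    """Completion time of each request on one in-order byte stream."""
    t, done = 0, {}
    for req in queue:
        t += cost[req]
        done[req] = t
    return done

cost = {"big_write": 100, "small_read": 1}

# single stream: the read is stuck behind the write
single = completion_times(["big_write", "small_read"], cost)

# separate read/write queues: each stream carries only its own traffic
split_reads = completion_times(["small_read"], cost)
```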
Sagi Grimberg (5):
blk-mq-rdma: pass in queue map to blk_mq_rdma_map_queues
Signed-off-by: Sagi Grimberg
---
drivers/nvme/host/fabrics.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index 524a02a67817..28dc916ef26b 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -88,6 +88,9
Reviewed-by: Sagi Grimberg
Yes, I'm very much in favour of this, too.
We always have this IMO slightly weird notion of stopping the queue, set
some error flags in the driver, then _restarting_ the queue, just so
that the driver then sees the error flag and terminates the requests.
Which I always found quite counter-intui
If it really becomes an issue we
should rework the nvme code to also skip the multipath code for any
private namespace, even if that could mean some trouble when rescanning.
This requires some explanation? skip the multipath code how?
We currently always go through the multipath node as lo
+static int nvme_poll_irqdisable(struct nvme_queue *nvmeq, unsigned int tag)
Do we still need to carry the tag around?
Yes, the timeout handler polls for a specific tag.
Does it have to? the documentation suggests that we missed
an interrupt, so it is probably waiting on the completion qu
@@ -2173,6 +2157,8 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
if (nr_io_queues == 0)
return 0;
+
+ clear_bit(NVMEQ_ENABLED, &adminq->flags);
This is a change of behavior, looks correct though as we can fail
nvme_setup_irqs after we freed
Nit, there seems to be an extra newline that can be omitted here before
the else if statement (if I'm reading this correctly)...
Empty lines can always be omitted, but in this case I actually like it
as it seems to help readability...
If you think it's useful I'm fine with it as is...
From: Sagi Grimberg
skb_copy_datagram_iter and skb_copy_and_csum_datagram are essentially
the same but with a couple of differences: the first is the copy
operation used, which is either a simple copy or a csum_and_copy, and
the second is the behavior on the "short copy" path where simply
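The consolidation idea can be sketched in Python (toy checksum, hypothetical names): one copy loop whose per-chunk operation is a callback, so the plain and checksumming variants share a single code path.

```python
def plain_copy(chunk, state):
    """Plain copy: no extra per-chunk work."""
    return chunk

def csum_and_copy(chunk, state):
    # toy 16-bit sum standing in for the real checksum fold
    state["csum"] = (state.get("csum", 0) + sum(chunk)) & 0xFFFF
    return chunk

def copy_datagram(chunks, op):
    """Single copy path; 'op' is the per-chunk callback."""
    out, state = bytearray(), {}
    for chunk in chunks:
        out += op(chunk, state)
    return bytes(out), state

data = [b"abc", b"defg"]
plain, _ = copy_datagram(data, plain_copy)
summed, st = copy_datagram(data, csum_and_copy)
```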
From: Sagi Grimberg
Introduce a helper to copy datagram into an iovec iterator
but also update a predefined hash. This is useful for
consumers of skb_copy_datagram_iter to also support inflight
data digest without having to finish the copy and only then
traverse the iovec and calculate the digest
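A sketch of the inflight approach, in Python with sha256 standing in for the data digest: the hash is updated as each chunk is copied, so no second pass over the buffers is needed.

```python
import hashlib

def copy_and_hash(chunks):
    """Copy an incoming stream into a buffer while keeping a
    digest up to date in flight."""
    h, buf = hashlib.sha256(), bytearray()
    for chunk in chunks:
        buf += chunk
        h.update(chunk)   # digest advances with the copy
    return bytes(buf), h.hexdigest()

data, digest = copy_and_hash([b"nvme", b"-", b"tcp"])
```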
From: Sagi Grimberg
Allow consumers that want to use iov iterator helpers and also update
a predefined hash calculation online when copying data. This is useful
when copying incoming network buffers to a local iterator and calculate
a digest on the incoming stream. nvme-tcp host driver that will
le patches
Sagi Grimberg (13):
ath6kl: add ath6kl_ prefix to crypto_type
datagram: open-code copy_page_to_iter
iov_iter: pass void csum pointer to csum_and_copy_to_iter
datagram: consolidate datagram copy to iter helpers
iov_iter: introduce hash_and_copy_to_iter helper
datagram:
From: Sagi Grimberg
The single caller to csum_and_copy_to_iter is skb_copy_and_csum_datagram
and we are trying to unite its logic with skb_copy_datagram_iter by passing
a callback to the copy function that we want to apply. Thus, we need
to make the checksum pointer private to the function
From: Sagi Grimberg
Prevent a namespace conflict, as in following patches skbuff.h will
include the crypto API.
Acked-by: David S. Miller
Cc: Kalle Valo
Signed-off-by: Sagi Grimberg
---
drivers/net/wireless/ath/ath6kl/cfg80211.c | 2 +-
drivers/net/wireless/ath/ath6kl/common.h | 2
From: Sagi Grimberg
nvmet-tcp will implement it to allocate queue commands which
are only known at nvmf connect time (sq size).
Reviewed-by: Christoph Hellwig
Signed-off-by: Sagi Grimberg
---
drivers/nvme/target/fabrics-cmd.c | 10 ++
drivers/nvme/target/nvmet.h | 1 +
2 files
From: Sagi Grimberg
Header digest is an nvme-tcp-specific feature, but nothing prevents other
transports from reusing the concept, so do not associate it with the tcp
transport solely.
Reviewed-by: Christoph Hellwig
Signed-off-by: Sagi Grimberg
---
drivers/nvme/host/fabrics.c | 5 +
drivers/nvme/host
From: Sagi Grimberg
Signed-off-by: Sagi Grimberg
---
include/linux/nvme-tcp.h | 189 +++
include/linux/nvme.h | 1 +
2 files changed, 190 insertions(+)
create mode 100644 include/linux/nvme-tcp.h
diff --git a/include/linux/nvme-tcp.h b/include/linux
From: Sagi Grimberg
This patch implements the NVMe over TCP host driver. It can be used to
connect to remote NVMe over Fabrics subsystems over good old TCP/IP.
The driver implements TP 8000, which defines how nvme over fabrics capsules and
data are encapsulated in nvme-tcp pdus and exchanged on top of a
From: Sagi Grimberg
Data digest is an nvme-tcp-specific feature, but nothing prevents other
transports from reusing the concept, so do not associate it with the tcp
transport solely.
Reviewed-by: Christoph Hellwig
Signed-off-by: Sagi Grimberg
---
drivers/nvme/host/fabrics.c | 5 +
drivers/nvme/host
From: Sagi Grimberg
Reviewed-by: Max Gurtovoy
Reviewed-by: Christoph Hellwig
Signed-off-by: Sagi Grimberg
---
drivers/nvme/target/configfs.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
index db2cb64be7ba..618bbd006544
From: Sagi Grimberg
This patch implements the TCP transport driver for the NVMe over Fabrics
target stack. This allows exporting NVMe over Fabrics functionality over
good old TCP/IP.
The driver implements TP 8000, which defines how nvme over fabrics capsules and
data are encapsulated in nvme-tcp pdus
From: Sagi Grimberg
This will be useful to consolidate skb_copy_and_hash_datagram_iter and
skb_copy_and_csum_datagram to a single code path.
Acked-by: David S. Miller
Signed-off-by: Sagi Grimberg
---
net/core/datagram.c | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff
Reviewed-by: Sagi Grimberg
A driver may wish to iterate every tagged request, not just ones that
satisfy blk_mq_request_started(). The intended use is so a driver may
terminate entered requests on quiesced queues.
How about we just move the started check into the handler passed in for
those that care about it? Much san
Reviewed-by: Sagi Grimberg
If it really becomes an issue we
should rework the nvme code to also skip the multipath code for any
private namespace, even if that could mean some trouble when rescanning.
This requires some explanation? skip the multipath code how?
Other than that,
Reviewed-by: Sagi Grimberg
er of disabling interrupts.
With that we can stop taking the cq_lock for normal queues.
Nice,
Reviewed-by: Sagi Grimberg
Reviewed-by: Sagi Grimberg
@@ -2428,7 +2426,8 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool
shutdown)
nvme_stop_queues(&dev->ctrl);
if (!dead && dev->ctrl.queue_count > 0) {
- nvme_disable_io_queues(dev);
+ if (nvme_disable_io_queues(dev, nvme_admin_delete_sq))
+
+static int nvme_poll_irqdisable(struct nvme_queue *nvmeq, unsigned int tag)
Do we still need to carry the tag around?
Other than that,
Reviewed-by: Sagi Grimberg
Reviewed-by: Sagi Grimberg
Looks good,
Reviewed-by: Sagi Grimberg
@@ -2173,6 +2157,8 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
if (nr_io_queues == 0)
return 0;
+
+ clear_bit(NVMEQ_ENABLED, &adminq->flags);
This is a change of behavior, looks correct though as we can fail
nvme_setup_irqs after we freed th
) == REQ_OP_READ))
+ type = HCTX_TYPE_READ;
Nit, there seems to be an extra newline that can be omitted here before
the else if statement (if I'm reading this correctly)...
Otherwise looks good,
Reviewed-by: Sagi Grimberg
What is the plan ahead here? I think the nvme code looks pretty
reasonable now (I'll do another pass at nitpicking), but we need the
networking stuff sorted out with at least ACKs, or a merge through
the networking tree and then a shared branch we can pull in.
I would think that having Dave
+static inline void nvmet_tcp_put_cmd(struct nvmet_tcp_cmd *cmd)
+{
+ if (unlikely(cmd == &cmd->queue->connect))
+ return;
If you don't return the connect cmd to the list, please don't add it to it in
the first place (during alloc_cmd). And if you use it once, we might
think of a clean
Reviewed-by: Sagi Grimberg
Reviewed-by: Sagi Grimberg
From: Sagi Grimberg
skb_copy_datagram_iter and skb_copy_and_csum_datagram are essentially
the same but with a couple of differences: the first is the copy
operation used, which is either a simple copy or a csum_and_copy, and
the second is the behavior on the "short copy" path where simply
From: Sagi Grimberg
Signed-off-by: Sagi Grimberg
---
include/linux/nvme-tcp.h | 189 +++
include/linux/nvme.h | 1 +
2 files changed, 190 insertions(+)
create mode 100644 include/linux/nvme-tcp.h
diff --git a/include/linux/nvme-tcp.h b/include/linux
From: Sagi Grimberg
Prevent a namespace conflict, as in following patches skbuff.h will
include the crypto API.
Cc: Kalle Valo
Signed-off-by: Sagi Grimberg
---
drivers/net/wireless/ath/ath6kl/cfg80211.c | 2 +-
drivers/net/wireless/ath/ath6kl/common.h | 2 +-
drivers/net/wireless/ath
From: Sagi Grimberg
nvmet-tcp will implement it to allocate queue commands which
are only known at nvmf connect time (sq size).
Reviewed-by: Christoph Hellwig
Signed-off-by: Sagi Grimberg
---
drivers/nvme/target/fabrics-cmd.c | 10 ++
drivers/nvme/target/nvmet.h | 1 +
2 files
From: Sagi Grimberg
Allow consumers that want to use iov iterator helpers and also update
a predefined hash calculation online when copying data. This is useful
when copying incoming network buffers to a local iterator and calculate
a digest on the incoming stream. nvme-tcp host driver that will
From: Sagi Grimberg
Data digest is an nvme-tcp-specific feature, but nothing prevents other
transports from reusing the concept, so do not associate it with the tcp
transport solely.
Reviewed-by: Christoph Hellwig
Signed-off-by: Sagi Grimberg
---
drivers/nvme/host/fabrics.c | 5 +
drivers/nvme/host
From: Sagi Grimberg
This will be useful to consolidate skb_copy_and_hash_datagram_iter and
skb_copy_and_csum_datagram to a single code path.
Signed-off-by: Sagi Grimberg
---
net/core/datagram.c | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/net/core/datagram.c b
From: Sagi Grimberg
Reviewed-by: Max Gurtovoy
Reviewed-by: Christoph Hellwig
Signed-off-by: Sagi Grimberg
---
drivers/nvme/target/configfs.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
index db2cb64be7ba..618bbd006544
From: Sagi Grimberg
Header digest is an nvme-tcp-specific feature, but nothing prevents other
transports from reusing the concept, so do not associate it with the tcp
transport solely.
Reviewed-by: Christoph Hellwig
Signed-off-by: Sagi Grimberg
---
drivers/nvme/host/fabrics.c | 5 +
drivers/nvme/host
From: Sagi Grimberg
This patch implements the NVMe over TCP host driver. It can be used to
connect to remote NVMe over Fabrics subsystems over good old TCP/IP.
The driver implements TP 8000, which defines how nvme over fabrics capsules and
data are encapsulated in nvme-tcp pdus and exchanged on top of a
Thanks to the members of the Fabrics Linux Driver team that helped with
developing, testing and benchmarking this work.
Gitweb code is available at:
git://git.infradead.org/nvme.git nvme-tcp
Sagi Grimberg (13):
ath6kl: add ath6kl_ prefix to crypto_type
datagram: open-code copy_page_to_iter
iov_
From: Sagi Grimberg
The single caller to csum_and_copy_to_iter is skb_copy_and_csum_datagram
and we are trying to unite its logic with skb_copy_datagram_iter by passing
a callback to the copy function that we want to apply. Thus, we need
to make the checksum pointer private to the function
From: Sagi Grimberg
Introduce a helper to copy datagram into an iovec iterator
but also update a predefined hash. This is useful for
consumers of skb_copy_datagram_iter to also support inflight
data digest without having to finish the copy and only then
traverse the iovec and calculate the digest
From: Sagi Grimberg
This patch implements the TCP transport driver for the NVMe over Fabrics
target stack. This allows exporting NVMe over Fabrics functionality over
good old TCP/IP.
The driver implements TP 8000, which defines how nvme over fabrics capsules and
data are encapsulated in nvme-tcp pdus
We currently only really support sync poll, i.e. poll with 1
IO in flight. This prepares us for supporting async poll.
Hey Jens,
So are we sure that this is fine to simply replace the
poll functionality? You say that we support poll
with only 1 I/O inflight, but is it entirely true?
semant
This looks odd. It's not really the timeout handler's job to
call nvme_end_request here.
Well.. if we are not yet LIVE, we will not trigger error
recovery, which means nothing will complete this command so
something needs to do it...
I think that we need it for rdma too..
yes we do. and w
The cli parts look good to me. Is there any foreseeable issue if I apply these
ahead of the kernel integration? I don't see any, but just want to
confirm since it's all in one series.
I don't see an issue; note that it's on top of the sqflow patch [1]
that I've been meaning to ping you on.
[PATCH nvme-
+ if (!cmd || queue->state == NVMET_TCP_Q_DISCONNECTING) {
+ cmd = nvmet_tcp_fetch_send_command(queue);
+ if (unlikely(!cmd))
+ return 0;
+ }
+
+ if (cmd->state == NVMET_TCP_SEND_DATA_PDU) {
+ ret = nvmet_try_send
+static enum nvme_tcp_recv_state nvme_tcp_recv_state(struct nvme_tcp_queue
*queue)
+{
+ return (queue->pdu_remaining) ? NVME_TCP_RECV_PDU :
+ (queue->ddgst_remaining) ? NVME_TCP_RECV_DDGST :
+ NVME_TCP_RECV_DATA;
+}
This just seems to be used in a single sw
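Modeled in Python (enum names shortened, a sketch only), the helper quoted above is just a fixed priority: PDU header bytes first, then the trailing digest, otherwise payload data.

```python
from enum import Enum

class Recv(Enum):
    PDU = 1
    DATA = 2
    DDGST = 3

def recv_state(pdu_remaining, ddgst_remaining):
    """Mirror of the ternary chain in nvme_tcp_recv_state."""
    if pdu_remaining:
        return Recv.PDU
    if ddgst_remaining:
        return Recv.DDGST
    return Recv.DATA
```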
From: Sagi Grimberg
skb_copy_datagram_iter and skb_copy_and_csum_datagram are essentially
the same but with a couple of differences: the first is the copy
operation used, which is either a simple copy or a csum_and_copy, and
the second is the behavior on the "short copy" path where simply