If for some reason we failed to query the MR status, we need to make sure
to provide sufficient information for an ambiguous error (guard error on
sector 0).
Fixes: 0a7a08ad6f5f ("IB/iser: Implement check_protection")
Cc:
Reported-by: Dan Carpenter
Signed-off-by: Sagi Grimberg
--
Acked-by: Sagi Grimberg <s...@grimberg.me>
Thanks very much for reply.
I think it may be better to assert failure and exit than to run
an endless loop after receiving the DEVICE_REMOVAL event.
Or we can sleep 5 ms and then check whether conn->h.state is STATE_FULL.
Note that neither of the CC'd lists are the correct list
for stgt, you
On 11/07/17 10:51, 李春 wrote:
We have met a problem of tgtd using 100% CPU.
The InfiniBand network card was negotiated into eth mode by mistake;
after we changed it to ib mode and restarted opensmd to get the correct State (Active),
tgtd used 100% of the CPU, and when we connected to it using tgtadm,
tgtadm hung.
The patches are dependent on the FC nvme/nvmet patches from the following 2
series:
http://lists.infradead.org/pipermail/linux-nvme/2017-April/009250.html
http://lists.infradead.org/pipermail/linux-nvme/2017-April/009256.html
Hmm,
So it seems that we have conflicts here
A local merge
Sagi
As long as legA, legB and the RC are all connected to the same switch then
ordering will be preserved (I think many other topologies also work). Here is
how it would work for the problem case you are concerned about (which is a read
from the NVMe drive).
1. Disk device DMAs out the
I hadn't done this yet but I think a simple closest device in the tree
would solve the issue sufficiently. However, I originally had it so the
user has to pick the device and I prefer that approach. But if the user
picks the device, then why bother restricting what he picks?
Because the user
Note that the nvme completion queues are still on the host memory, so
this means we have lost the ordering between data and completions as
they go to different pcie targets.
Hmm, in this simple up/down case with a switch, I think it might
actually be OK.
Transactions might not complete at
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
u16 nvmet_copy_to_sgl(struct nvmet_req *req, off_t off, const void *buf,
size_t len)
{
- if (sg_pcopy_from_buffer(req->sg, req->sg_cnt, buf, len, off) != len)
+ bool iomem = req->p2pmem;
+ size_t ret;
+
+ ret = sg_copy_buffer(req->sg, req->sg_cnt, (void
+ p2pmem_debugfs_root = debugfs_create_dir("p2pmem", NULL);
+ if (!p2pmem_debugfs_root)
+ pr_info("could not create debugfs entry, continuing\n");
+
Why continue? I think it'd be better to just fail it.
Besides, this can be safely squashed into patch 1.
+static void setup_memwin_p2pmem(struct adapter *adap)
+{
+ unsigned int mem_base = t4_read_reg(adap, CIM_EXTMEM2_BASE_ADDR_A);
+ unsigned int mem_size = t4_read_reg(adap, CIM_EXTMEM2_ADDR_SIZE_A);
+
+ if (!use_p2pmem)
+ return;
This is weird, why even call
Hey Logan,
We create a configfs attribute in each nvme-fabrics target port to
enable p2p memory use. When enabled, the port will only then use the
p2p memory if a p2p memory device can be found which is behind the
same switch as the RDMA port and all the block devices in use. If
the user
add missing Kconfig NVME dependencies
Can't believe I missed posting this
-- james
Heh, this sort of comment should come after
the '---' separator (below) unless you want it to live forever
in the git log...
Signed-off-by: James Smart
---
[here]
I'm fine with the path selectors getting moved out; maybe it'll
encourage new path selectors to be developed.
But there will need to be some userspace interface stood up to support
your native NVMe multipathing (you may not think it needed but think in
time there will be a need to configure
Hi James,
This patch set adds support for NVME over Fabrics FC transport
to lpfc
The internals of the driver are reworked to support being either:
a SCSI initiator;
an NVME initiator;
both a SCSI initiator and an NVME initiator;
or an NVME target.
The driver effectively has parallel NVME and
Hey Chris and Guilherme,
I'm indeed not responsive under this email address.
Thanks for the testing, looks like you have the magic target to reproduce this.
I think this verifies Mike's idea of what was going wrong, and we're way
overdue to get this fixed upstream. Thanks to IBM for
Christoph suggested to me once that we could take a hybrid
approach where we consume a small number of completions (say 4)
right away from the interrupt handler, and if we have more
we schedule irq-poll to reap the rest. But back then it
didn't work better, which is not aligned with my observations
I think you missed:
http://git.infradead.org/nvme.git/commit/49c91e3e09dc3c9dd1718df85112a8cce3ab7007
I indeed did, thanks.
But it doesn't help.
We're still having to wait for the first interrupt, and if we're really
fast that's the only completion we have to process.
Try this:
diff
@@ -1014,9 +1013,9 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue
*queue,
}
Christoph, a little above here we still look at blk_rq_bytes(),
shouldn't that look at blk_rq_payload_bytes() too?
The check is ok for now as it's just zero vs non-zero. It's somewhat
broken for
Hannes just spotted this:
static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
const struct blk_mq_queue_data *bd)
{
[...]
__nvme_submit_cmd(nvmeq, &cmnd);
nvme_process_cq(nvmeq);
spin_unlock_irq(&nvmeq->q_lock);
return BLK_MQ_RQ_QUEUE_OK;
Your report provided these stats with one-completion dominance for the
single-threaded case. Does it also hold if you run multiple fio
threads per core?
It's useless to run more threads on that core, it's already fully
utilized. That single thread is already posting a fair amount of
So what you say is you saw consumed == 1 [1] most of the time?
[1] from
http://git.infradead.org/nvme.git/commitdiff/eed5a9d925c59e43980047059fde29e3aa0b7836
Exactly. By processing 1 completion per interrupt it makes perfect sense
why this performs poorly, it's not worth paying the
@@ -1014,9 +1013,9 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue
*queue,
}
Christoph, a little above here we still look at blk_rq_bytes(),
shouldn't that look at blk_rq_payload_bytes() too?
if (count == 1) {
- if (rq_data_dir(rq) == WRITE &&
-
So it looks like we are super inefficient because most of the
time we catch 1 completion per interrupt, and the whole point is that
we need to find more! This fio is single threaded with QD=32 so I'd
expect that we'd be somewhere in 8-31 almost all the time... I also
Just for the record, all tests you've run are with the upper irq_poll_budget of
256 [1]?
Yes, but that's the point, I never ever reach this budget because
I'm only processing 1-2 completions per interrupt.
We (Hannes and I) recently stumbled across this when trying to poll for more
than
Oh, and the current code that was tested can be found at:
git://git.infradead.org/nvme.git nvme-irqpoll
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Hey, so I made some initial analysis of what's going on with
irq-poll.
First, I sampled how much time it takes before we
get the interrupt in nvme_irq and the initial visit
to nvme_irqpoll_handler. I ran a single threaded fio
with QD=32 of 4K reads. This is two displays of a
histogram of the
--
[1]
queue = b'nvme0q1'
usecs : count
0 -> 1 : 7310
2 -> 3 : 11
4 -> 7 : 10
8 -> 15 : 20
Looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
This looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Add a helper to calculate the actual data transfer size for special
payload requests.
Signed-off-by: Christoph Hellwig
---
include/linux/blkdev.h | 13 +
1 file changed, 13 insertions(+)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index
**Note: when I ran multiple threads on more cpus the performance
degradation phenomenon disappeared, but I tested on a VM with
qemu emulation backed by null_blk so I figured I had some other
bottleneck somewhere (that's why I asked for some more testing).
That could be because of the vmexits
I agree with Jens that we'll need some analysis if we want the
discussion to be effective, and I can spend some time on this if I
can find volunteers with high-end nvme devices (I only have access
to client nvme devices).
I have a P3700 but somehow burned the FW. Let me see if I can bring it back
Hi Folks,
I would like to propose a general discussion on Storage stack and device driver
testing.
I think it's very useful and needed.
Purpose:-
-
The main objective of this discussion is to address the need for
a Unified Test Automation Framework which can be used by
A typical Ethernet network adapter delays the generation of an interrupt
after it has received a packet. A typical block device or HBA does not
delay the generation of an interrupt that reports an I/O completion.
>>> NVMe allows for configurable interrupt coalescing,
I'd like to attend LSF/MM and would like to discuss polling for block
drivers.
Currently there is blk-iopoll, but it is not as widely used as NAPI in
the networking field, and according to Sagi's findings in [1] performance
with polling is not on par with IRQ usage.
On LSF/MM I'd like to
Hi all,
I'd like to attend LSF/MM and would like to discuss polling for block drivers.
Currently there is blk-iopoll, but it is not as widely used as NAPI in the
networking field, and according to Sagi's findings in [1] performance with
polling is not on par with IRQ usage.
On LSF/MM I'd
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
and again,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Again,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Looks useful,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Hey Bart,
The solution I prefer is to modify the SCSI scanning code such that
the scan_mutex is only held while performing the actual LUN scanning
and while ensuring that no SCSI device has been created yet for a
certain LUN number but not while the Linux device and its sysfs
attributes are
Looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Hi Robert,
Hey Robert, Christoph,
Please explain the use cases that aren't handled. The one and only
reason to set MSDBD to 1 is to make the code a lot simpler, given that
there is no real use case for supporting more.
RDMA uses memory registrations to register large and possibly
Looks fine,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Thanks for moving it,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Hello Ming,
Can you have a look at the attached patch? That patch uses an srcu read
lock for all queue types, whether or not the BLK_MQ_F_BLOCKING flag has
been set. Additionally, I have dropped the QUEUE_FLAG_QUIESCING flag.
Just like previous versions, this patch has been tested.
Hey Bart,
Looks good,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Make nvme_requeue_req() check BLK_MQ_S_STOPPED instead of
QUEUE_FLAG_STOPPED. Remove the QUEUE_FLAG_STOPPED manipulations
that became superfluous because of this change. This patch fixes
a race condition: using queue_flag_clear_unlocked() is not safe
if any other function that manipulates the
Avoid that nvme_queue_rq() is still running when nvme_stop_queues()
returns. Untested.
Signed-off-by: Bart Van Assche <bart.vanass...@sandisk.com>
Cc: Keith Busch <keith.bu...@intel.com>
Cc: Christoph Hellwig <h...@lst.de>
Cc: Sagi Grimberg <s...@grimberg.me>
Bar
+static void srp_mq_wait_for_queuecommand(struct Scsi_Host *shost)
+{
+ struct scsi_device *sdev;
+ struct request_queue *q;
+
+ shost_for_each_device(sdev, shost) {
+ q = sdev->request_queue;
+
+ blk_mq_quiesce_queue(q);
+
Looks fine,
Reviewed-by: Christoph Hellwig
Hey James,
Can you collect review tags, address CR comments and resend the series?
I'd like to stage these for 0-day testing and try to get it into 4.9.
Thanks,
Sagi.
Hi,
Hi Baharat,
In the iSER target, during iWARP connection tear-down due to ping timeouts,
the RDMA queues are set to the error state, and subsequently posted iSCSI
session commands fail with the corresponding errno returned by
ib_post_send/recv.
At this stage iser target handlers (Ex:
Looks good, for the series:
Reviewed-by: Sagi Grimberg
Let me see if I get this correct:
4.5.0_rc3_1aaa57f5_00399
sdc;10.218.128.17;4627942;1156985;18126
sdf;10.218.202.17;4590963;1147740;18272
sdk;10.218.203.17;4564980;1141245;18376
sdn;10.218.204.17;4571946;1142986;18348
sdd;10.219.128.17;4591717;1147929;18269
202219;17444
Thanks for the suggestions, I'll work to get some of the requested
data back to you guys quickly.
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Tue, Jun 21, 2016 at 7:08 AM, Sagi Grimberg <sagig...@gmail.com> wrote:
Hey Robert,
I narrowed the performance degradation to this series
7861728..5e47f19, but while trying to bisect it, the changes were
erratic between commits, so I could not figure out exactly which one
introduced the issue. If someone could give me some pointers on what
to do, I can keep trying
*) Extensible to multiple types of backend drivers.
nvme-target needs a way to absorb new backend drivers that
does not affect the existing configfs group layout or attributes.
Looking at the nvmet/configfs layout as-is, there are no multiple
backend types defined, nor a way to control backend
Hey Dan,
Hello Sagi Grimberg,
The patch a73c2a2f9123: "libiscsi: Use scsi helper to set information
descriptor" from Jul 15, 2015, leads to the following static checker
warning:
drivers/scsi/libiscsi.c:858 iscsi_scsi_cmd_rsp()
error: XXX uninitialized symbol 'sector'
On 09/04/16 16:11, Varun Prakash wrote:
Add two callbacks to struct iscsit_transport -
1. void *(*iscsit_alloc_pdu)()
iscsi-target uses this callback for
iSCSI PDU allocation.
2. void (*iscsit_free_pdu)
iscsi-target uses this callback
to free an iSCSI PDU which was
Add void (*iscsit_get_r2t_ttt)() to
struct iscsit_transport, iscsi-target
uses this callback to get
r2t->targ_xfer_tag.
Your driver allocates ttt's? That looks like bad
layering to me. This definitely deserves an explanation...
cxgbit.ko allocates ttt only for r2t pdus to do Direct Data
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Acked-by: Sagi Grimberg <s...@grimberg.me>
In iser we sorta rely on 4k pages so we avoid
PAGE_SIZE and rather set SIZE_4K for this sort
of thing (like we did in the virt_boundary).
So you still want only 4k segments even on PPC where the PAGE_SIZE is
16k?
Yes, iSER has the "no-gaps" constraint (like nvme) and some
applications in
So that we don't overflow the number of MR segments allocated, because
we have to split one SGL segment into multiple MR segments.
Signed-off-by: Christoph Hellwig
---
drivers/infiniband/ulp/iser/iscsi_iser.c | 1 +
1 file changed, 1 insertion(+)
diff --git
Signed-off-by: Ming Lin <min...@ssi.samsung.com>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
h any code for which Sagi is the
maintainer so I think my ack for the ib_srp changes is sufficient.
Indeed no need for my ack, but a review tag can't hurt ;)
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
From: Ming Lin <min...@ssi.samsung.com>
Rename SCSI specific structs and functions to more generic names.
Reviewed-by: Christoph Hellwig <h...@lst.de>
Signed-off-by: Ming Lin <min...@ssi.samsung.com>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
From: Ming Lin <min...@ssi.samsung.com>
Parameter "bool mq" is block driver specific.
Change it to "first_chunk" to make it more generic.
Reviewed-by: Christoph Hellwig <h...@lst.de>
Signed-off-by: Ming Lin <min...@ssi.samsung.com>
Review
From: Ming Lin <min...@ssi.samsung.com>
Replace parameter "struct scsi_data_buffer" with "struct sg_table" in
SG alloc/free functions to make them generic.
Reviewed-by: Christoph Hellwig <h...@lst.de>
Signed-off-by: Ming Lin <min...@ssi.samsung.c
We're still missing an ack from Sagi.
Oops, wasn't aware that mine was needed.
I reviewed these on the nvme-fabrics project.
I'll add my review tags too.
Hey Willy,
- Interrupt steering needs to be controlled by block-mq instead of
the driver. It's pointless to have each driver implement its own
policies on interrupt steering, irqbalanced remains a source of
end-user frustration, and block-mq can change the queue<->cpu mapping
Fixes should go in the front of the series (probably even better
detached from the set), and this also looks like stable material...
+static ssize_t lio_target_np_hw_offload_show(struct config_item *item,
+char *page)
+{
+ struct iscsi_tpg_np *tpg_np = to_iscsi_tpg_np(item);
+ struct iscsi_tpg_np *tpg_np_hw_offload;
+ ssize_t rb;
+
+ tpg_np_hw_offload =
Looks fine,
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
Add void (*iscsit_get_r2t_ttt)() to
struct iscsit_transport, iscsi-target
uses this callback to get
r2t->targ_xfer_tag.
Your driver allocates ttt's? That looks like bad
layering to me. This definitely deserves an explanation...
Add int (*iscsit_validate_params)() to
struct iscsit_transport, iscsi-target
uses this callback for validating
conn operational parameters.
Again, why is this needed?
Add void (*iscsit_get_rx_pdu)() to
struct iscsit_transport, iscsi-target
uses this callback to receive and
process Rx iSCSI PDUs.
Same comment on change logs.
The iser bit looks harmless
Acked-by: Sagi Grimberg <s...@grimberg.me>
Though I agree with hch that we don't really need
rx t
Add void (*iscsit_release_cmd)() to
struct iscsit_transport, iscsi-target
uses this callback to release transport
driver resources associated with an iSCSI cmd.
I'd really like to see some reasoning on why you add
abstraction callouts. It may have a valid reason but
it needs to be documented
On 09/04/16 16:11, Varun Prakash wrote:
Add int (*iscsit_xmit_datain_pdu)() to
struct iscsit_transport, iscsi-target
uses this callback to transmit a DATAIN
iSCSI PDU.
Signed-off-by: Varun Prakash
---
drivers/target/iscsi/iscsi_target.c| 143
Nice!
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
On 09/04/16 16:11, Varun Prakash wrote:
Add two callbacks to struct iscsit_transport -
1. void *(*iscsit_alloc_pdu)()
iscsi-target uses this callback for
iSCSI PDU allocation.
2. void (*iscsit_free_pdu)
iscsi-target uses this callback
to free an iSCSI PDU which was
Great. Just curious how -v2 is coming along..?
I've got a few cycles over the weekend, and plan to start reviewing as
the series hits the list.
Btw, I asked Sagi to help out with review as well.
If you did, it's lost in my old email address :)
I'll have a look this week.
Cheers,
Sagi.
--
Acked-by: Sagi Grimberg <sa...@mellanox.com>
1. void (*iscsit_rx_pdu)(struct iscsi_conn *);
Rx thread uses this for receiving and processing
iSCSI PDU in full feature phase.
Is iscsit_rx_pdu the best name for this? It sounds like
a function that would handle a single PDU, but it's actually the
thread function dequeuing PDUs, correct?