[PATCH] iser: set sector for ambiguous mr status errors

2018-11-14 Thread Sagi Grimberg
If for some reason we failed to query the mr status, we need to make sure to provide sufficient information for an ambiguous error (guard error on sector 0). Fixes: 0a7a08ad6f5f ("IB/iser: Implement check_protection") Cc: Reported-by: Dan Carpenter Signed-off-by: Sagi Grimberg --
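A minimal sketch of the idea, assuming the failure is caught in iser's check_protection path as in the Fixes: commit (context abbreviated, names hedged):

	ret = ib_check_mr_status(sig_mr, IB_MR_CHECK_SIG_STATUS, &mr_status);
	if (ret) {
		iser_err("ib_check_mr_status failed, ret %d\n", ret);
		/* nothing better is known; report an ambiguous guard error on sector 0 */
		*sector = 0;
		return 0x1;
	}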

Re: [PATCH v2 08/15] nvmet: make config_item_type const

2017-10-17 Thread Sagi Grimberg
Acked-by: Sagi Grimberg <s...@grimberg.me>

Re: tgtd CPU 100% problem

2017-07-11 Thread Sagi Grimberg
Thanks very much for the reply. I think it may be better to assert and exit than to run an endless loop after receiving a DEVICE_REMOVAL event. Or we could sleep 5 ms and re-check whether conn->h.state is STATE_FULL. Note that neither of the CC'd lists is the correct list for stgt, you
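A rough sketch of the suggested sleep-and-recheck (hypothetical placement in tgtd's event handling; conn->h.state and STATE_FULL come from the report, conn_aborted() is an invented helper):

	/* back off instead of spinning after DEVICE_REMOVAL */
	while (conn->h.state != STATE_FULL) {
		if (conn_aborted(conn))	/* hypothetical: give up cleanly */
			return;
		usleep(5000);		/* 5 ms, as suggested above */
	}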

Re: tgtd CPU 100% problem

2017-07-11 Thread Sagi Grimberg
On 11/07/17 10:51, 李春 wrote: We have met a problem of tgtd using 100% CPU. The InfiniBand card was negotiated into eth mode by mistake; after we changed it to ib mode and restarted opensmd so the port State was Active again, tgtd was using 100% of the CPU, and when we connected to it using tgtadm, tgtadm hung

Re: ["PATCH-v2" 00/22] lpfc updates for 11.2.0.12

2017-04-20 Thread Sagi Grimberg
The patches are dependent on the FC nvme/nvmet patches from the following 2 series: http://lists.infradead.org/pipermail/linux-nvme/2017-April/009250.html http://lists.infradead.org/pipermail/linux-nvme/2017-April/009256.html Hmm, so it seems that we have conflicts here. A local merge

Re: [RFC 6/8] nvmet: Be careful about using iomem accesses when dealing with p2pmem

2017-04-10 Thread Sagi Grimberg
Sagi As long as legA, legB and the RC are all connected to the same switch then ordering will be preserved (I think many other topologies also work). Here is how it would work for the problem case you are concerned about (which is a read from the NVMe drive). 1. Disk device DMAs out the

Re: [RFC 3/8] nvmet: Use p2pmem in nvme target

2017-04-05 Thread Sagi Grimberg
I hadn't done this yet but I think a simple closest device in the tree would solve the issue sufficiently. However, I originally had it so the user has to pick the device and I prefer that approach. But if the user picks the device, then why bother restricting what he picks? Because the user

Re: [RFC 6/8] nvmet: Be careful about using iomem accesses when dealing with p2pmem

2017-04-05 Thread Sagi Grimberg
Note that the nvme completion queues are still on the host memory, so this means we have lost the ordering between data and completions as they go to different pcie targets. Hmm, in this simple up/down case with a switch, I think it might actually be OK. Transactions might not complete at

Re: [PATCH 2/5] nvme: cleanup nvme_req_needs_retry

2017-04-05 Thread Sagi Grimberg
Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH 4/5] nvme: move the retries count to struct nvme_request

2017-04-05 Thread Sagi Grimberg
Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH 3/5] nvme: mark nvme_max_retries static

2017-04-05 Thread Sagi Grimberg
Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH 1/5] nvme: move ->retries setup to nvme_setup_cmd

2017-04-05 Thread Sagi Grimberg
Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH 5/5] block, scsi: move the retries field to struct scsi_request

2017-04-05 Thread Sagi Grimberg
Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [RFC 6/8] nvmet: Be careful about using iomem accesses when dealing with p2pmem

2017-04-04 Thread Sagi Grimberg
u16 nvmet_copy_to_sgl(struct nvmet_req *req, off_t off, const void *buf,
		size_t len)
{
-	if (sg_pcopy_from_buffer(req->sg, req->sg_cnt, buf, len, off) != len)
+	bool iomem = req->p2pmem;
+	size_t ret;
+
+	ret = sg_copy_buffer(req->sg, req->sg_cnt, (void

Re: [RFC 4/8] p2pmem: Add debugfs "stats" file

2017-04-04 Thread Sagi Grimberg
+	p2pmem_debugfs_root = debugfs_create_dir("p2pmem", NULL);
+	if (!p2pmem_debugfs_root)
+		pr_info("could not create debugfs entry, continuing\n");
+

Why continue? I think it'd be better to just fail it. Besides, this can be safely squashed into patch 1.
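The fail-instead-of-continue variant would look something like this (sketch; debugfs_create_dir() returned NULL on failure in kernels of that era):

	p2pmem_debugfs_root = debugfs_create_dir("p2pmem", NULL);
	if (!p2pmem_debugfs_root) {
		pr_err("could not create debugfs entry\n");
		return -ENOMEM;	/* fail module init instead of continuing */
	}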

Re: [RFC 2/8] cxgb4: setup pcie memory window 4 and create p2pmem region

2017-04-04 Thread Sagi Grimberg
+static void setup_memwin_p2pmem(struct adapter *adap)
+{
+	unsigned int mem_base = t4_read_reg(adap, CIM_EXTMEM2_BASE_ADDR_A);
+	unsigned int mem_size = t4_read_reg(adap, CIM_EXTMEM2_ADDR_SIZE_A);
+
+	if (!use_p2pmem)
+		return;

This is weird, why even call
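The review point, presumably, is that the register reads should come after the use_p2pmem check, e.g.:

	static void setup_memwin_p2pmem(struct adapter *adap)
	{
		unsigned int mem_base, mem_size;

		if (!use_p2pmem)
			return;	/* don't touch the registers at all */

		mem_base = t4_read_reg(adap, CIM_EXTMEM2_BASE_ADDR_A);
		mem_size = t4_read_reg(adap, CIM_EXTMEM2_ADDR_SIZE_A);
		/* ... */
	}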

Re: [RFC 3/8] nvmet: Use p2pmem in nvme target

2017-04-04 Thread Sagi Grimberg
Hey Logan, We create a configfs attribute in each nvme-fabrics target port to enable p2p memory use. When enabled, the port will only then use the p2p memory if a p2p memory device can be found which is behind the same switch as the RDMA port and all the block devices in use. If the user

Re: [PATCH] lpfc: add missing Kconfig NVME dependencies

2017-02-22 Thread Sagi Grimberg
add missing Kconfig NVME dependencies Can't believe I missed posting this -- james Heh, this sort of comment should come after the '---' separator (below) unless you want it to live forever in the git log... Signed-off-by: James Smart --- [here]

Re: hch's native NVMe multipathing [was: Re: [PATCH 1/2] Don't blacklist nvme]

2017-02-16 Thread Sagi Grimberg
I'm fine with the path selectors getting moved out; maybe it'll encourage new path selectors to be developed. But there will need to be some userspace interface stood up to support your native NVMe multipathing (you may not think it needed but think in time there will be a need to configure

Re: [PATCH v3 00/16] lpfc: Add NVME Fabrics support

2017-02-15 Thread Sagi Grimberg
Hi James, This patch set adds support for the NVME over Fabrics FC transport to lpfc. The internals of the driver are reworked to support being either: a SCSI initiator; an NVME initiator; both a SCSI initiator and an NVME initiator; or an NVME target. The driver effectively has parallel NVME and

Re: [PATCH 1/1] iscsi: fix regression caused by session lock patch

2017-02-06 Thread Sagi Grimberg
Hey Chris and Guilherme, I'm indeed not responsive under this email address. Thanks for the testing, looks like you have the magic target to reproduce this. I think this verifies what Mike's idea of what was going wrong, and we're way overdue to get this fixed upstream. Thanks to IBM for

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-19 Thread Sagi Grimberg
Christoph suggested to me once that we can take a hybrid approach where we consume a small number of completions (say 4) right away from the interrupt handler and, if we have more, schedule irq-poll to reap the rest. But back then it didn't work better, which is not aligned with my observations
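A sketch of that hybrid scheme (nvme_process_cq_limited() and the iop member are hypothetical; irq_poll_sched() is the real irq-poll API):

	#define NVME_IRQ_INLINE_BUDGET	4	/* completions reaped in hard-irq context */

	static irqreturn_t nvme_irq(int irq, void *data)
	{
		struct nvme_queue *nvmeq = data;
		int reaped;

		/* consume a few completions directly in the interrupt handler... */
		reaped = nvme_process_cq_limited(nvmeq, NVME_IRQ_INLINE_BUDGET);

		/* ...and defer to irq-poll only if more work is likely pending */
		if (reaped == NVME_IRQ_INLINE_BUDGET)
			irq_poll_sched(&nvmeq->iop);

		return reaped ? IRQ_HANDLED : IRQ_NONE;
	}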

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-19 Thread Sagi Grimberg
I think you missed: http://git.infradead.org/nvme.git/commit/49c91e3e09dc3c9dd1718df85112a8cce3ab7007 I indeed did, thanks. But it doesn't help. We're still having to wait for the first interrupt, and if we're really fast that's the only completion we have to process. Try this: diff

Re: [PATCH 3/4] nvme: use blk_rq_payload_bytes

2017-01-18 Thread Sagi Grimberg
@@ -1014,9 +1013,9 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue *queue, } Christoph, a little above here we still look at blk_rq_bytes(), shouldn't that look at blk_rq_payload_bytes() too? The check is ok for now as it's just zero vs non-zero. It's somewhat broken for

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-18 Thread Sagi Grimberg
Hannes just spotted this:

static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
			 const struct blk_mq_queue_data *bd)
{
[...]
	__nvme_submit_cmd(nvmeq, &cmnd);
	nvme_process_cq(nvmeq);
	spin_unlock_irq(&nvmeq->q_lock);
	return BLK_MQ_RQ_QUEUE_OK;

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-18 Thread Sagi Grimberg
Your report provided these stats with one-completion dominance for the single-threaded case. Does it also hold if you run multiple fio threads per core? It's useless to run more threads on that core, it's already fully utilized. That single thread is already posting a fair amount of

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-18 Thread Sagi Grimberg
So what you say is you saw consumed == 1 [1] most of the time? [1] from http://git.infradead.org/nvme.git/commitdiff/eed5a9d925c59e43980047059fde29e3aa0b7836 Exactly. By processing 1 completion per interrupt it makes perfect sense why this performs poorly, it's not worth paying the

Re: [PATCH 3/4] nvme: use blk_rq_payload_bytes

2017-01-17 Thread Sagi Grimberg
@@ -1014,9 +1013,9 @@ static int nvme_rdma_map_data(struct nvme_rdma_queue *queue, }

Christoph, a little above here we still look at blk_rq_bytes(), shouldn't that look at blk_rq_payload_bytes() too?

	if (count == 1) {
-		if (rq_data_dir(rq) == WRITE &&
-

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-17 Thread Sagi Grimberg
So it looks like we are super inefficient because most of the time we catch 1 completion per interrupt and the whole point is that we need to find more! This fio is single threaded with QD=32 so I'd expect us to be somewhere in 8-31 almost all the time... I also

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-17 Thread Sagi Grimberg
Just for the record, all tests you've run are with the upper irq_poll_budget of 256 [1]? Yes, but that's the point, I never ever reach this budget because I'm only processing 1-2 completions per interrupt. We (Hannes and I) recently stumbled across this when trying to poll for more than

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-17 Thread Sagi Grimberg
Oh, and the current code that was tested can be found at: git://git.infradead.org/nvme.git nvme-irqpoll

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-17 Thread Sagi Grimberg
Hey, so I made some initial analysis of what's going on with irq-poll. First, I sampled how much time it takes before we get the interrupt in nvme_irq and the initial visit to nvme_irqpoll_handler. I ran a single-threaded fio with QD=32 of 4K reads. These are two displays of a histogram of the

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-17 Thread Sagi Grimberg
[1] queue = b'nvme0q1'

     usecs       : count     distribution
         0 -> 1  : 7310      |****************************************|
         2 -> 3  : 11        |                                        |
         4 -> 7  : 10        |                                        |
         8 -> 15 : 20        |                                        |

Re: [PATCH 4/4] sd: remove __data_len hack for WRITE SAME

2017-01-13 Thread Sagi Grimberg
Looks good, Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH 3/4] nvme: use blk_rq_payload_bytes

2017-01-13 Thread Sagi Grimberg
This looks good, Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH 2/4] scsi: use blk_rq_payload_bytes

2017-01-13 Thread Sagi Grimberg
Looks good, Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH 1/4] block: add blk_rq_payload_bytes

2017-01-13 Thread Sagi Grimberg
Add a helper to calculate the actual data transfer size for special payload requests. Signed-off-by: Christoph Hellwig --- include/linux/blkdev.h | 13 + 1 file changed, 13 insertions(+) diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h index
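For reference, the helper is roughly this (RQF_SPECIAL_PAYLOAD marks requests, e.g. discards, whose transfer size differs from blk_rq_bytes()):

	static inline unsigned int blk_rq_payload_bytes(struct request *rq)
	{
		if (rq->rq_flags & RQF_SPECIAL_PAYLOAD)
			return rq->special_vec.bv_len;
		return blk_rq_bytes(rq);
	}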

Re: [Lsf-pc] [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-12 Thread Sagi Grimberg
**Note: when I ran multiple threads on more cpus the performance degradation phenomenon disappeared, but I tested on a VM with qemu emulation backed by null_blk so I figured I had some other bottleneck somewhere (that's why I asked for some more testing). That could be because of the vmexits

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-12 Thread Sagi Grimberg
I agree with Jens that we'll need some analysis if we want the discussion to be effective, and I can spend some time on this if I can find volunteers with high-end nvme devices (I only have access to client nvme devices. I have a P3700 but somehow burned the FW. Let me see if I can bring it back

Re: [Lsf-pc] [LFS/MM TOPIC][LFS/MM ATTEND]: - Storage Stack and Driver Testing methodology.

2017-01-12 Thread Sagi Grimberg
Hi Folks, I would like to propose a general discussion on storage stack and device driver testing. I think it's very useful and needed. Purpose: The main objective of this discussion is to address the need for a Unified Test Automation Framework which can be used by

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-12 Thread Sagi Grimberg
A typical Ethernet network adapter delays the generation of an interrupt after it has received a packet. A typical block device or HBA does not delay the generation of an interrupt that reports an I/O completion. NVMe allows for configurable interrupt coalescing,

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-12 Thread Sagi Grimberg
I'd like to attend LSF/MM and would like to discuss polling for block drivers. Currently there is blk-iopoll but it is not as widely used as NAPI in the networking field, and according to Sagi's findings in [1] performance with polling is not on par with IRQ usage. On LSF/MM I'd like to

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-12 Thread Sagi Grimberg
Hi all, I'd like to attend LSF/MM and would like to discuss polling for block drivers. Currently there is blk-iopoll but it is not as widely used as NAPI in the networking field, and according to Sagi's findings in [1] performance with polling is not on par with IRQ usage. On LSF/MM I'd

Re: [PATCH 04/12] target: avoid to access .bi_vcnt directly

2016-11-12 Thread Sagi Grimberg
Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH v5 12/14] SRP transport, scsi-mq: Wait for .queue_rq() if necessary

2016-11-01 Thread Sagi Grimberg
and again, Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH v5 13/14] nvme: Fix a race condition related to stopping queues

2016-11-01 Thread Sagi Grimberg
Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH v5 11/14] SRP transport: Move queuecommand() wait code to SCSI core

2016-11-01 Thread Sagi Grimberg
Again, Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH v5 08/14] blk-mq: Add a kick_requeue_list argument to blk_mq_requeue_request()

2016-11-01 Thread Sagi Grimberg
Looks useful, Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH v5 07/14] blk-mq: Introduce blk_mq_quiesce_queue()

2016-11-01 Thread Sagi Grimberg
Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH v5 06/14] blk-mq: Remove blk_mq_cancel_requeue_work()

2016-11-01 Thread Sagi Grimberg
Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH v5 05/14] blk-mq: Avoid that requeueing starts stopped queues

2016-11-01 Thread Sagi Grimberg
Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH 2/3] scsi: allow LLDDs to expose the queue mapping to blk-mq

2016-11-01 Thread Sagi Grimberg
Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH] Avoid that SCSI device removal through sysfs triggers a deadlock

2016-10-27 Thread Sagi Grimberg
Hey Bart, The solution I prefer is to modify the SCSI scanning code such that the scan_mutex is only held while performing the actual LUN scanning and while ensuring that no SCSI device has been created yet for a certain LUN number but not while the Linux device and its sysfs attributes are

Re: [PATCH 11/12] nvme: Use BLK_MQ_S_STOPPED instead of QUEUE_FLAG_STOPPED in blk-mq code

2016-10-27 Thread Sagi Grimberg
Looks good, Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH 01/12] blk-mq: Do not invoke .queue_rq() for a stopped queue

2016-10-27 Thread Sagi Grimberg
Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: A question regarding "multiple SGL"

2016-10-27 Thread Sagi Grimberg
Hi Robert, Hey Robert, Christoph, please explain the use cases that aren't handled. The one and only reason to set MSDBD to 1 is to make the code a lot simpler given that there is no real use case for supporting more. RDMA uses memory registrations to register large and possibly

Re: [PATCH 02/12] blk-mq: Introduce blk_mq_hctx_stopped()

2016-10-27 Thread Sagi Grimberg
Looks fine, Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH 04/12] blk-mq: Move more code into blk_mq_direct_issue_request()

2016-10-27 Thread Sagi Grimberg
Looks good, Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH 10/12] SRP transport, scsi-mq: Wait for .queue_rq() if necessary

2016-10-27 Thread Sagi Grimberg
Thanks for moving it, Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH 09/12] SRP transport: Move queuecommand() wait code to SCSI core

2016-10-27 Thread Sagi Grimberg
Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH v2 4/7] blk-mq: Introduce blk_quiesce_queue() and blk_resume_queue()

2016-10-05 Thread Sagi Grimberg
Hello Ming, Can you have a look at the attached patch? That patch uses an srcu read lock for all queue types, whether or not the BLK_MQ_F_BLOCKING flag has been set. Additionally, I have dropped the QUEUE_FLAG_QUIESCING flag. Just like previous versions, this patch has been tested. Hey Bart,

Re: [PATCH v2 1/7] blk-mq: Introduce blk_mq_queue_stopped()

2016-10-05 Thread Sagi Grimberg
Looks good, Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH v2 3/7] [RFC] nvme: Use BLK_MQ_S_STOPPED instead of QUEUE_FLAG_STOPPED in blk-mq code

2016-10-05 Thread Sagi Grimberg
Make nvme_requeue_req() check BLK_MQ_S_STOPPED instead of QUEUE_FLAG_STOPPED. Remove the QUEUE_FLAG_STOPPED manipulations that became superfluous because of this change. This patch fixes a race condition: using queue_flag_clear_unlocked() is not safe if any other function that manipulates the
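A sketch of the resulting requeue path (hedged; blk_mq_queue_stopped() is the helper introduced in patch 1 of this series):

	static void nvme_requeue_req(struct request *req)
	{
		blk_mq_requeue_request(req);
		/* kick the requeue list only if the queue isn't stopped */
		if (!blk_mq_queue_stopped(req->q))
			blk_mq_kick_requeue_list(req->q);
	}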

Re: [PATCH v2 7/7] [RFC] nvme: Fix a race condition

2016-10-05 Thread Sagi Grimberg
Avoid that nvme_queue_rq() is still running when nvme_stop_queues() returns. Untested. Signed-off-by: Bart Van Assche <bart.vanass...@sandisk.com> Cc: Keith Busch <keith.bu...@intel.com> Cc: Christoph Hellwig <h...@lst.de> Cc: Sagi Grimberg <s...@grimberg.me> Bar

Re: [PATCH v2 6/7] SRP transport: Port srp_wait_for_queuecommand() to scsi-mq

2016-10-05 Thread Sagi Grimberg
+static void srp_mq_wait_for_queuecommand(struct Scsi_Host *shost)
+{
+	struct scsi_device *sdev;
+	struct request_queue *q;
+
+	shost_for_each_device(sdev, shost) {
+		q = sdev->request_queue;
+
+		blk_mq_quiesce_queue(q);
+

Re: [PATCH 1/5] nvme-fabrics: Add FC transport FC-NVME definitions

2016-08-16 Thread Sagi Grimberg
Looks fine, Reviewed-by: Christoph Hellwig Hey James, Can you collect review tags, address CR comments and resend the series? I'd like to stage these for 0-day testing and try to get it into 4.9. Thanks, Sagi.

Re: IB/isert: Return value of iser target transport handlers ignored by iscsi target

2016-08-07 Thread Sagi Grimberg
Hi, Hi Baharat, In the iSER target, during iWARP connection tear-down due to ping timeouts, the RDMA queues are set to the error state and subsequently queued iSCSI session commands fail with the corresponding errno returned by ib_post_send/recv. At this stage the iSER target handlers (Ex:

Re: [PATCH v2 1/3] block: provide helpers for reading block count

2016-06-23 Thread Sagi Grimberg
Looks good, for the series: Reviewed-by: Sagi Grimberg

Re: Connect-IB not performing as well as ConnectX-3 with iSER

2016-06-22 Thread Sagi Grimberg
Let me see if I get this correct:

4.5.0_rc3_1aaa57f5_00399
sdc;10.218.128.17;4627942;1156985;18126
sdf;10.218.202.17;4590963;1147740;18272
sdk;10.218.203.17;4564980;1141245;18376
sdn;10.218.204.17;4571946;1142986;18348
sdd;10.219.128.17;4591717;1147929;18269

Re: Connect-IB not performing as well as ConnectX-3 with iSER

2016-06-22 Thread Sagi Grimberg
202219;17444 Thanks for the suggestions, I'll work to get some of the requested data back to you guys quickly. Robert LeBlanc PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1 On Tue, Jun 21, 2016 at 7:08 AM, Sagi Grimberg <sagig...@gmail.com> wrote:

Re: Connect-IB not performing as well as ConnectX-3 with iSER

2016-06-21 Thread Sagi Grimberg
Hey Robert, I narrowed the performance degradation to this series 7861728..5e47f19, but while trying to bisect it, the results were so erratic between commits that I could not figure out exactly which one introduced the issue. If someone could give me some pointers on what to do, I can keep trying

Re: NVMe over Fabrics target implementation

2016-06-08 Thread Sagi Grimberg
*) Extensible to multiple types of backend drivers. nvme-target needs a way to absorb new backend drivers that does not affect the existing configfs group layout or attributes. Looking at the nvmet/configfs layout as-is, there are no multiple backend types defined, nor a way to control backend

Re: libiscsi: Use scsi helper to set information descriptor

2016-04-13 Thread Sagi Grimberg
Hey Dan, Hello Sagi Grimberg, The patch a73c2a2f9123: "libiscsi: Use scsi helper to set information descriptor" from Jul 15, 2015, leads to the following static checker warning: drivers/scsi/libiscsi.c:858 iscsi_scsi_cmd_rsp() error: XXX uninitialized symbol 'sector'

Re: [PATCH v2 01/16] iscsi-target: add callback to alloc and free PDU

2016-04-13 Thread Sagi Grimberg
On 09/04/16 16:11, Varun Prakash wrote: Add two callbacks to struct iscsit_transport - 1. void *(*iscsit_alloc_pdu)() iscsi-target uses this callback for iSCSI PDU allocation. 2. void (*iscsit_free_pdu) iscsi-target uses this callback to free an iSCSI PDU which was

Re: [PATCH v2 08/16] iscsi-target: add void (*iscsit_get_r2t_ttt)()

2016-04-13 Thread Sagi Grimberg
Add void (*iscsit_get_r2t_ttt)() to struct iscsit_transport, iscsi-target uses this callback to get r2t->targ_xfer_tag. Your driver allocates ttt's? That looks like bad layering to me. This definitely deserves an explanation... cxgbit.ko allocates ttt only for r2t pdus to do Direct Data

Re: [PATCH 1/2] scsi: add a max_segment_size limitation to struct Scsi_Host

2016-04-13 Thread Sagi Grimberg
Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH 2/2] IB/iser: set max_segment_size

2016-04-13 Thread Sagi Grimberg
Acked-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH 2/2] IB/iser: set max_segment_size

2016-04-13 Thread Sagi Grimberg
In iser we sorta rely on 4k pages so we avoid PAGE_SIZE but rather set SIZE_4K for these sort of things (like we did in the virt_boundary). So you still want only 4k segments even on PPC where the PAGE_SIZE is 16k? Yes, iSER has the "no-gaps" constraint (like nvme) and some applications in

Re: [PATCH 2/2] IB/iser: set max_segment_size

2016-04-12 Thread Sagi Grimberg
So that we don't overflow the number of MR segments allocated, because we have to split one SGL segment into multiple MR segments. Signed-off-by: Christoph Hellwig --- drivers/infiniband/ulp/iser/iscsi_iser.c | 1 + 1 file changed, 1 insertion(+) diff --git
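The change itself is presumably a one-liner in iser's scsi_host setup (sketch; whether the cap should be PAGE_SIZE or iser's SIZE_4K is what the thread above debates):

	shost->max_segment_size = SIZE_4K;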

Re: [PATCH v3 5/5] lib: scatterlist: move SG pool code from SCSI driver to lib/sg_pool.c

2016-04-12 Thread Sagi Grimberg
Signed-off-by: Ming Lin <min...@ssi.samsung.com> Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH v3 4/5] scsi: rename SCSI_MAX_{SG, SG_CHAIN}_SEGMENTS

2016-04-12 Thread Sagi Grimberg
h any code for which Sagi is the maintainer so I think my ack for the ib_srp changes is sufficient. Indeed no need for my ack, but a review tag can't hurt ;) Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH v3 3/5] scsi: rename SG related struct and functions

2016-04-12 Thread Sagi Grimberg
From: Ming Lin <min...@ssi.samsung.com> Rename SCSI-specific struct and functions to more generic names. Reviewed-by: Christoph Hellwig <h...@lst.de> Signed-off-by: Ming Lin <min...@ssi.samsung.com> Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH v3 2/5] scsi: replace "mq" with "first_chunk" in SG functions

2016-04-12 Thread Sagi Grimberg
From: Ming Lin <min...@ssi.samsung.com> Parameter "bool mq" is block driver specific. Change it to "first_chunk" to make it more generic. Reviewed-by: Christoph Hellwig <h...@lst.de> Signed-off-by: Ming Lin <min...@ssi.samsung.com> Review

Re: [PATCH v3 1/5] scsi: replace "scsi_data_buffer" with "sg_table" in SG functions

2016-04-12 Thread Sagi Grimberg
From: Ming Lin <min...@ssi.samsung.com> Replace parameter "struct scsi_data_buffer" with "struct sg_table" in SG alloc/free functions to make them generic. Reviewed-by: Christoph Hellwig <h...@lst.de> Signed-off-by: Ming Lin <min...@ssi.samsung.c

Re: [PATCH v3 4/5] scsi: rename SCSI_MAX_{SG, SG_CHAIN}_SEGMENTS

2016-04-12 Thread Sagi Grimberg
We're still missing an ack from Sagi. Oops, wasn't aware that mine was needed. I reviewed these on the nvme-fabrics project. I'll add my review tags too.

Re: [Lsf] [LSF/MM TOPIC] block-mq issues with FC

2016-04-10 Thread Sagi Grimberg
Hey Willy, - Interrupt steering needs to be controlled by block-mq instead of the driver. It's pointless to have each driver implement its own policies on interrupt steering, irqbalanced remains a source of end-user frustration, and block-mq can change the queue<->cpu mapping

Re: [PATCH v2 15/16] iscsi-target: fix seq_end_offset calculation

2016-04-10 Thread Sagi Grimberg
Fixes should go in the front of the series (probably even better detached from the set), and this also looks like stable material...

Re: [PATCH v2 11/16] iscsi-target: add new offload transport type

2016-04-10 Thread Sagi Grimberg
+static ssize_t lio_target_np_hw_offload_show(struct config_item *item,
+					     char *page)
+{
+	struct iscsi_tpg_np *tpg_np = to_iscsi_tpg_np(item);
+	struct iscsi_tpg_np *tpg_np_hw_offload;
+	ssize_t rb;
+
+	tpg_np_hw_offload =

Re: [PATCH v2 10/16] iscsi-target: use conn->network_transport in text rsp

2016-04-10 Thread Sagi Grimberg
Looks fine, Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH v2 08/16] iscsi-target: add void (*iscsit_get_r2t_ttt)()

2016-04-10 Thread Sagi Grimberg
Add void (*iscsit_get_r2t_ttt)() to struct iscsit_transport, iscsi-target uses this callback to get r2t->targ_xfer_tag. Your driver allocates ttt's? That looks like bad layering to me. This definitely deserves an explanation...

Re: [PATCH v2 07/16] iscsi-target: add int (*iscsit_validate_params)()

2016-04-10 Thread Sagi Grimberg
Add int (*iscsit_validate_params)() to struct iscsit_transport, iscsi-target uses this callback for validating conn operational parameters. Again, why is this needed?

Re: [PATCH v2 05/16] iscsi-target: add void (*iscsit_get_rx_pdu)()

2016-04-10 Thread Sagi Grimberg
Add void (*iscsit_get_rx_pdu)() to struct iscsit_transport, iscsi-target uses this callback to receive and process Rx iSCSI PDUs. Same comment on change logs. The iser bit looks harmless. Acked-by: Sagi Grimberg <s...@grimber.me> Though I agree with hch that we don't really need rx t

Re: [PATCH v2 04/16] iscsi-target: add void (*iscsit_release_cmd)()

2016-04-10 Thread Sagi Grimberg
Add void (*iscsit_release_cmd)() to struct iscsit_transport, iscsi-target uses this callback to release transport driver resources associated with an iSCSI cmd. I'd really like to see some reasoning on why you add abstraction callouts. It may have a valid reason but it needs to be documented

Re: [PATCH v2 03/16] iscsi-target: add int (*iscsit_xmit_datain_pdu)()

2016-04-10 Thread Sagi Grimberg
On 09/04/16 16:11, Varun Prakash wrote: Add int (*iscsit_xmit_datain_pdu)() to struct iscsit_transport, iscsi-target uses this callback to transmit a DATAIN iSCSI PDU. Signed-off-by: Varun Prakash --- drivers/target/iscsi/iscsi_target.c | 143

Re: [PATCH v2 02/16] iscsi-target: add int (*iscsit_xmit_pdu)()

2016-04-10 Thread Sagi Grimberg
Nice! Reviewed-by: Sagi Grimberg <s...@grimberg.me>

Re: [PATCH v2 01/16] iscsi-target: add callback to alloc and free PDU

2016-04-10 Thread Sagi Grimberg
On 09/04/16 16:11, Varun Prakash wrote: Add two callbacks to struct iscsit_transport - 1. void *(*iscsit_alloc_pdu)() iscsi-target uses this callback for iSCSI PDU allocation. 2. void (*iscsit_free_pdu) iscsi-target uses this callback to free an iSCSI PDU which was

Re: [RFC 14/34] iscsi-target: export symbols

2016-04-10 Thread Sagi Grimberg
Great. Just curious how -v2 is coming along? I've got a few cycles over the weekend, and plan to start reviewing as the series hits the list. Btw, I asked Sagi to help out with review as well. If you did, it's lost in my old email address :) I'll have a look this week. Cheers, Sagi.

Re: [RFC 22/34] iscsi-target: call Rx thread function

2016-02-15 Thread Sagi Grimberg
Acked-by: Sagi Grimberg <sa...@mellanox.com>

Re: [RFC 20/34] iscsi-target: update struct iscsit_transport definition

2016-02-15 Thread Sagi Grimberg
1. void (*iscsit_rx_pdu)(struct iscsi_conn *); The Rx thread uses this for receiving and processing iSCSI PDUs in the full feature phase. Is iscsit_rx_pdu the best name for this? It sounds like a function that would handle a single PDU, but it's actually the thread function dequeuing PDUs, correct?
