Re: [PATCH] bsg: convert to use blk-mq
On 10/23/18 12:07 PM, Jens Axboe wrote:
> On 10/23/18 11:40 AM, Benjamin Block wrote:
>> On Mon, Oct 22, 2018 at 06:38:36AM -0600, Jens Axboe wrote:
>>> On 10/22/18 4:03 AM, Benjamin Block wrote:
>>>> On Fri, Oct 19, 2018 at 09:50:53AM -0600, Jens Axboe wrote:
>>>>
>>>> Ok so, that gets past the stage where we initialize the queues.
>>>> Simple SCSI I/O also seems to work, that is for example an
>>>> INQUIRY(10), but transport commands that get passed to the driver
>>>> break. Tried to send a FibreChannel GPN_FT (remote port discovery),
>>>> as far as the BSG interface goes. This is a bidirectional command
>>>> that has both a buffer for the request and for the reply. AFAIR BSG
>>>> will create a struct request for each of them. Protocol is
>>>> BSG_PROTOCOL_SCSI, subprotocol BSG_SUB_PROTOCOL_SCSI_TRANSPORT. The
>>>> rest should be transparent till we get into the driver.
>>>>
>>>> First got this:
>>>>
>>>> [  566.531100] BUG: sleeping function called from invalid context at mm/slab.h:421
>>>> [  566.531452] in_atomic(): 1, irqs_disabled(): 0, pid: 3104, name: bsg_api_test
>>>> [  566.531460] 1 lock held by bsg_api_test/3104:
>>>> [  566.531466]  #0: cb4b58e8 (rcu_read_lock){}, at: hctx_lock+0x30/0x118
>>>> [  566.531498] Preemption disabled at:
>>>> [  566.531503] [<008175d0>] __blk_mq_delay_run_hw_queue+0x50/0x218
>>>> [  566.531519] CPU: 3 PID: 3104 Comm: bsg_api_test Tainted: G W 4.19.0-rc6-bb-next+ #1
>>>> [  566.531527] Hardware name: IBM 3906 M03 704 (LPAR)
>>>> [  566.531533] Call Trace:
>>>> [  566.531544] ([<001167fa>] show_stack+0x8a/0xd8)
>>>> [  566.531555]  [<00bcc6d2>] dump_stack+0x9a/0xd8
>>>> [  566.531565]  [<00196410>] ___might_sleep+0x280/0x298
>>>> [  566.531576]  [<003e528c>] __kmalloc+0xbc/0x560
>>>> [  566.531584]  [<0083186a>] bsg_map_buffer+0x5a/0xb0
>>>> [  566.531591]  [<00831948>] bsg_queue_rq+0x88/0x118
>>>> [  566.531599]  [<0081ab56>] blk_mq_dispatch_rq_list+0x37e/0x670
>>>> [  566.531607]  [<0082050e>] blk_mq_do_dispatch_sched+0x11e/0x130
>>>> [  566.531615]  [<00820dfe>] blk_mq_sched_dispatch_requests+0x156/0x1a0
>>>> [  566.531622]  [<00817564>] __blk_mq_run_hw_queue+0x144/0x160
>>>> [  566.531630]  [<00817614>] __blk_mq_delay_run_hw_queue+0x94/0x218
>>>> [  566.531638]  [<008178b2>] blk_mq_run_hw_queue+0xda/0xf0
>>>> [  566.531645]  [<008211d8>] blk_mq_sched_insert_request+0x1a8/0x1e8
>>>> [  566.531653]  [<00811ee2>] blk_execute_rq_nowait+0x72/0x80
>>>> [  566.531660]  [<00811f66>] blk_execute_rq+0x76/0xb8
>>>> [  566.531778]  [<00830d0e>] bsg_ioctl+0x426/0x500
>>>> [  566.531787]  [<00440cb4>] do_vfs_ioctl+0x68c/0x710
>>>> [  566.531794]  [<00440dac>] ksys_ioctl+0x74/0xa0
>>>> [  566.531801]  [<00440e0a>] sys_ioctl+0x32/0x40
>>>> [  566.531808]  [<00bf1dd0>] system_call+0xd8/0x2d0
>>>> [  566.531815] 1 lock held by bsg_api_test/3104:
>>>> [  566.531821]  #0: cb4b58e8 (rcu_read_lock){}, at: hctx_lock+0x30/0x118
>>>
>>> The first one is an easy fix, not sure how I missed that. The other
>>> one I have no idea, any chance you could try with this one:
>>>
>>> http://git.kernel.dk/cgit/linux-block/commit/?h=mq-conversions&id=142dc9f36e3113b6a76d472978c33c8c2a2b702c
>>>
>>> which fixes the first one, and also corrects a wrong end_io call,
>>> but I don't think that's the cause of the above.
>>>
>>> If it crashes, can you figure out where in the source that is?
>>> Basically just do
>>>
>>> gdb vmlinux
>>> l *zfcp_fc_exec_bsg_job+0x116
>>>
>>> assuming that works fine on s390 :-)
>>>
>>
>> So I tried 4.19.0 with only the two patches from you:
>> http://git.kernel.dk/cgit/linux-block/commit/?h=mq-conversions&id=2b2ffa16193e9a69a076595ed64429b8cc9b42aa
>> http://git.kernel.dk/cgit/linux-block/commit/?h=mq-conversions&id=142dc9f36e3113b6a76d472978c33c8c2a2b702c
>>
>> This fixed the first warning from before, as you suggested, but it
>> still crashes like this:
>>
>> [ ] Unable to handle kernel pointer dereference in virtual kernel address space
>> [ ] Failing address: TEID: 0483
>> [ ] Fault in home space mode while using kernel ASCE.
>> [ ] AS:025f0007 R3:dffb8007 S:dffbf000 P:013d
>> [ ] Oops: 0004 ilc:3 [#1] PREEMPT SMP DEBUG_PAGEALLOC
>> [ ] Modules linked in:
>> [ ] CPU: 2 PID: 609 Comm: kworker/2:1H Kdump: loaded Tainted: G W 4.19.0-bb-next+ #1
>> [ ] Hardware name: IBM 3906 M03 704 (LPAR)
>> [ ] Workqueue: kblockd blk_mq_run_work_fn
>> [ ] Krnl PSW : 0704e0018000 03ff806a6b40 (zfcp_fc_exec_bsg_job+0x1c0/0x440 [zfcp])
>> [ ] R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:2 PM:0 RI:0 EA:3
>> [ ] Krnl GPRS: 83e0f3c0 0300
>> [ ] 0300
Re: [PATCH v3 2/5] qla2xxx_nvmet: Add files for FC-NVMe Target support
On 9/28/2018 3:46 PM, Himanshu Madhani wrote:
> +	.target_features	= NVMET_FCTGTFEAT_READDATA_RSP |
> +				  NVMET_FCTGTFEAT_CMD_IN_ISR |
> +				  NVMET_FCTGTFEAT_OPDONE_IN_ISR,

Himanshu,

I'm looking at these but had a quick question. Did you really want the
IN_ISR flags set? They schedule processing vs calling the nvmet routines
inline. The intent was that the queueing was only needed if in the hard
ISR routine. Last contact I had with your group said you were in soft
ISR routines and inline calling would be used.

I'm asking because I had intended to remove these flags/features.

-- james
Submit Proposals to the 2019 Linux Storage and Filesystems Conference!
After a one-year hiatus, the Linux Storage and Filesystems Conference (Vault) returns in 2019, under the sponsorship and organization of the USENIX Association. Vault brings together practitioners, implementers, users, and researchers working on storage in open source and related projects. We welcome creators and users of open source storage, file systems, and related technologies to submit their work and to join us for Vault '19, which will take place on February 25-26, 2019, in Boston, MA, USA, co-located with the 17th USENIX Conference on File and Storage Technologies (FAST '19).

Learn more about Vault '19: https://www.usenix.org/conference/vault19
Learn more about FAST '19: https://www.usenix.org/conference/fast19

We are looking for proposals on a diverse range of topics related to storage, Linux, and open source. The best talks share your or your team's experience with a new technology, a new idea, or a new approach, or inspire the audience to think beyond the ways they have always done things. We are also accepting proposals for a limited number of workshop sessions, where content can be more tutorial-like in nature or include hands-on participation by attendees. We encourage new speakers to submit talks, as some of the most insightful talks come from people with new experiences to share.

Previous Vault events have drawn hundreds of attendees from a wide range of companies, with backgrounds ranging from individual open source contributors and new startups to teams within the technology and storage giants, as well as storage end users.

Talk and workshop proposals are due on Thursday, November 15, 2018. Please read through the Call for Participation for additional details, including topics of interest and submission instructions.

View the Vault '19 Call for Participation: https://www.usenix.org/conference/vault19/call-for-participation

We look forward to receiving your proposals!
Christoph Hellwig
Erik Riedel
Ric Wheeler, Red Hat
vault19cha...@usenix.org
[PATCH] rescan-scsi-bus.sh: use LUN wildcard in idlist
By scanning for LUN 0 only, we may encounter a device that the kernel
won't add (e.g. peripheral device type 31) and which may thus never
appear in sysfs for us to use for REPORT LUNS. That causes LUN additions
for such devices to be missed by "rescan-scsi-bus.sh -a".

Signed-off-by: Martin Wilck
---
 scripts/rescan-scsi-bus.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/rescan-scsi-bus.sh b/scripts/rescan-scsi-bus.sh
index 6989208..a2aa8d8 100755
--- a/scripts/rescan-scsi-bus.sh
+++ b/scripts/rescan-scsi-bus.sh
@@ -376,7 +376,7 @@ idlist ()
   oldlist=$(ls /sys/class/scsi_device/ | sed -n "s/${host}:${channel}:\([0-9]*:[0-9]*\)/\1/p" | uniq)
   # Rescan LUN 0 to check if we found new targets
-  echo "${channel} - 0" > /sys/class/scsi_host/host${host}/scan
+  echo "${channel} - -" > /sys/class/scsi_host/host${host}/scan
   newlist=$(ls /sys/class/scsi_device/ | sed -n "s/${host}:${channel}:\([0-9]*:[0-9]*\)/\1/p" | uniq)
   for newid in $newlist ; do
     oldid=$newid
--
2.19.1