From: Tejun Heo <[email protected]>

commit 8ba61435d73f2274e12d4d823fde06735e8f6a54 upstream.

blk_insert_cloned_request(), blk_execute_rq_nowait() and
blk_flush_plug_list() either didn't check whether the queue was dead
or did it without holding queue_lock.  Update them so that dead state
is checked while holding queue_lock.

AFAICS, this plugs all holes (requeue doesn't matter as the request is
transitioning atomically from in_flight to queued).

Signed-off-by: Tejun Heo <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
[bwh: Backported to 2.6.32:
 - Drop inapplicable changes to queue_unplugged() and
   blk_flush_plug_list()
 - We don't have blk_queue_dead() so open-code it
 - Adjust context]
Signed-off-by: Ben Hutchings <[email protected]>
---
This is not as complete as the upstream version.  I couldn't work out
where the 'dead' check belongs in the unplugging functions.  It does fix
the locking around the 'dead' check in blk_execute_rq_nowait(), which
was added by "fix crash in scsi_dispatch_cmd()" in 2.6.32.61.

Ben.
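
For reference, the open-coded test_bit() checks in this backport stand
in for the upstream blk_queue_dead() helper, which in the 3.x kernels
is roughly the following (shown only to make the correspondence clear;
the exact definition in include/linux/blkdev.h may differ slightly):

	/* true once blk_cleanup_queue() has marked the queue dead */
	#define blk_queue_dead(q)	test_bit(QUEUE_FLAG_DEAD, &(q)->queue_flags)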

 block/blk-core.c | 4 ++++
 block/blk-exec.c | 6 ++++--
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 4058f46..fc40ab9 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1651,6 +1651,10 @@ int blk_insert_cloned_request(struct request_queue *q, struct request *rq)
 #endif
 
        spin_lock_irqsave(q->queue_lock, flags);
+       if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags))) {
+               spin_unlock_irqrestore(q->queue_lock, flags);
+               return -ENODEV;
+       }
 
        /*
         * Submitting request must be dequeued before calling this function
diff --git a/block/blk-exec.c b/block/blk-exec.c
index 85bd7b4..ae0f2c7 100644
--- a/block/blk-exec.c
+++ b/block/blk-exec.c
@@ -50,7 +50,11 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
 {
        int where = at_head ? ELEVATOR_INSERT_FRONT : ELEVATOR_INSERT_BACK;
 
+       WARN_ON(irqs_disabled());
+       spin_lock_irq(q->queue_lock);
+
        if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags))) {
+               spin_unlock_irq(q->queue_lock);
                rq->errors = -ENXIO;
                if (rq->end_io)
                        rq->end_io(rq, rq->errors);
@@ -59,8 +63,6 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk,
 
        rq->rq_disk = bd_disk;
        rq->end_io = done;
-       WARN_ON(irqs_disabled());
-       spin_lock_irq(q->queue_lock);
        __elv_add_request(q, rq, where, 1);
        __generic_unplug_device(q);
        /* the queue is stopped so it won't be plugged+unplugged */


-- 
Ben Hutchings
Experience is directly proportional to the value of equipment destroyed.
                                                         - Carolyn Scheppner
