4.8-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Bart Van Assche <[email protected]>

commit 3b785fbcf81c3533772c52b717f77293099498d3 upstream.

Mark the request queue as dead before destroying the DM device. This
prevents new requests from being queued while __dm_destroy() is in
progress.
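
To make the ordering concrete, below is a minimal, self-contained
userspace model of the race being closed (illustrative only;
queue_request() and destroy_queue() are hypothetical stand-ins, not
kernel APIs):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for a request queue with a "dying" flag. */
struct toy_queue {
	pthread_mutex_t lock;	/* models q->queue_lock */
	bool dying;		/* models QUEUE_FLAG_DYING */
	int nr_requests;
};

/* Models request submission: refuse new work once the flag is set. */
static int queue_request(struct toy_queue *q)
{
	int ret = 0;

	pthread_mutex_lock(&q->lock);
	if (q->dying)
		ret = -1;	/* the block layer returns -ENODEV here */
	else
		q->nr_requests++;
	pthread_mutex_unlock(&q->lock);
	return ret;
}

/* Models __dm_destroy(): set the flag under the lock, then tear down. */
static void destroy_queue(struct toy_queue *q)
{
	pthread_mutex_lock(&q->lock);
	q->dying = true;
	pthread_mutex_unlock(&q->lock);
	/* From here on no new request can be admitted, so the rest of
	 * the teardown cannot race with request submission. */
}

int main(void)
{
	struct toy_queue q = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.dying = false,
		.nr_requests = 0,
	};

	printf("before destroy: %d\n", queue_request(&q));	/* 0 */
	destroy_queue(&q);
	printf("after destroy:  %d\n", queue_request(&q));	/* -1 */
	return 0;
}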

Signed-off-by: Bart Van Assche <[email protected]>
Signed-off-by: Mike Snitzer <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
 drivers/md/dm.c |    5 +++++
 1 file changed, 5 insertions(+)

--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1873,6 +1873,7 @@ EXPORT_SYMBOL_GPL(dm_device_name);
 
 static void __dm_destroy(struct mapped_device *md, bool wait)
 {
+       struct request_queue *q = dm_get_md_queue(md);
        struct dm_table *map;
        int srcu_idx;
 
@@ -1883,6 +1884,10 @@ static void __dm_destroy(struct mapped_d
        set_bit(DMF_FREEING, &md->flags);
        spin_unlock(&_minor_lock);
 
+       spin_lock_irq(q->queue_lock);
+       queue_flag_set(QUEUE_FLAG_DYING, q);
+       spin_unlock_irq(q->queue_lock);
+
        if (dm_request_based(md) && md->kworker_task)
                flush_kthread_worker(&md->kworker);
 

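For context on why setting QUEUE_FLAG_DYING is sufficient to fence off
new requests: in this era the block core checks the flag on every
request allocation, along the lines of the following fragment from
__get_request() in block/blk-core.c (heavily abbreviated; argument
list elided):

	static struct request *__get_request(struct request_list *rl, ...)
	{
		struct request_queue *q = rl->q;

		/* No new requests once the queue has been marked dying. */
		if (unlikely(blk_queue_dying(q)))
			return ERR_PTR(-ENODEV);
		...
	}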
