On 04/26/2017 08:37 PM, Bart Van Assche wrote:
> Reduce the requeue delay in dm_requeue_original_request() from 5s
> to 0.5s so that this delay no longer slows down failover or failback.
> Increase the requeue delay in dm_mq_queue_rq() from 0.1s to 0.5s
> to reduce the system load when the dm driver has requested immediate
> requeuing.
> 
> Signed-off-by: Bart Van Assche <[email protected]>
> Cc: Hannes Reinecke <[email protected]>
> Cc: Christoph Hellwig <[email protected]>
> ---
>  drivers/md/dm-rq.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
> index 0b081d170087..c53debdcd7dc 100644
> --- a/drivers/md/dm-rq.c
> +++ b/drivers/md/dm-rq.c
> @@ -280,7 +280,7 @@ static void dm_requeue_original_request(struct dm_rq_target_io *tio, bool delay_
>       if (!rq->q->mq_ops)
>               dm_old_requeue_request(rq);
>       else
> -             dm_mq_delay_requeue_request(rq, delay_requeue ? 5000 : 0);
> +             dm_mq_delay_requeue_request(rq, delay_requeue ? 500/*ms*/ : 0);
>  
>       rq_completed(md, rw, false);
>  }
> @@ -755,7 +755,7 @@ static int dm_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
>               /* Undo dm_start_request() before requeuing */
>               rq_end_stats(md, rq);
>               rq_completed(md, rq_data_dir(rq), false);
> -             blk_mq_delay_run_hw_queue(hctx, 100/*ms*/);
> +             blk_mq_delay_run_hw_queue(hctx, 500/*ms*/);
>               return BLK_MQ_RQ_QUEUE_BUSY;
>       }
>  
> 
Reviewed-by: Hannes Reinecke <[email protected]>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                Teamlead Storage & Networking
[email protected]                                   +49 911 74053 688
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: F. Imendörffer, J. Smithard, J. Guild, D. Upmanyu, G. Norton
HRB 21284 (AG Nürnberg)

--
dm-devel mailing list
[email protected]
https://www.redhat.com/mailman/listinfo/dm-devel