On Thu, May 04, 2017 at 12:55:30AM +0800, Ming Lei wrote:
> On Wed, May 03, 2017 at 09:29:36AM -0700, Omar Sandoval wrote:
> > On Fri, Apr 28, 2017 at 11:15:38PM +0800, Ming Lei wrote:
> > > When the tag space of a device is big enough, we use hw tags
> > > directly for I/O scheduling.
> > > 
> > > Currently the decision is made when the hw queue depth is not less
> > > than q->nr_requests and the tag set isn't shared.
> > > 
> > > Signed-off-by: Ming Lei <[email protected]>
> > > ---
> > >  block/blk-mq-sched.c |  8 ++++++++
> > >  block/blk-mq-sched.h | 15 +++++++++++++++
> > >  block/blk-mq.c       | 18 +++++++++++++++++-
> > >  3 files changed, 40 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
> > > index 45a675f07b8b..4681e27c127e 100644
> > > --- a/block/blk-mq-sched.c
> > > +++ b/block/blk-mq-sched.c
> > > @@ -507,6 +507,7 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
> > >   struct elevator_queue *eq;
> > >   unsigned int i;
> > >   int ret;
> > > + bool auto_hw_tag;
> > >  
> > >   if (!e) {
> > >           q->elevator = NULL;
> > > @@ -519,7 +520,14 @@ int blk_mq_init_sched(struct request_queue *q, struct elevator_type *e)
> > >    */
> > >   q->nr_requests = 2 * BLKDEV_MAX_RQ;
> > >  
> > > + auto_hw_tag = blk_mq_sched_may_use_hw_tag(q);
> > > +
> > >   queue_for_each_hw_ctx(q, hctx, i) {
> > > +         if (auto_hw_tag)
> > > +                 hctx->flags |= BLK_MQ_F_SCHED_USE_HW_TAG;
> > > +         else
> > > +                 hctx->flags &= ~BLK_MQ_F_SCHED_USE_HW_TAG;
> > > +
> > >           ret = blk_mq_sched_alloc_tags(q, hctx, i);
> > >           if (ret)
> > >                   goto err;
> > 
> > I think you should also clear the BLK_MQ_F_SCHED_USE_HW_TAG flag in
> > blk_mq_exit_sched()?
> 
> That doesn't look necessary, since the flag is always evaluated in
> blk_mq_init_sched().

What if we're setting the scheduler to "none"? Then blk_mq_init_sched()
takes this early return:

if (!e) {
        q->elevator = NULL;
        return 0;
}
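
So the BLK_MQ_F_SCHED_USE_HW_TAG flag left over from the previous
scheduler is never cleared on the hctxs. Something along these lines in
blk_mq_exit_sched(), where the request_queue q is available, would take
care of it (untested, just a sketch of what I mean):

	struct blk_mq_hw_ctx *hctx;
	unsigned int i;

	/*
	 * Clear the hint on every hw queue when the scheduler is torn
	 * down, so switching to "none" doesn't leave it set.
	 */
	queue_for_each_hw_ctx(q, hctx, i)
		hctx->flags &= ~BLK_MQ_F_SCHED_USE_HW_TAG;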
