On 10/05/2016 03:49 PM, Ming Lei wrote:
We can use an srcu read lock for BLOCKING and an rcu read lock for
non-BLOCKING, by putting *_read_lock() and *_read_unlock() into two
wrappers, which should minimize the cost of the srcu read lock & unlock
while keeping the code easy to read & verify.
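Ming's two-wrapper idea could look roughly like this (a kernel-style sketch, not a standalone program; the wrapper names and the srcu field in struct blk_mq_hw_ctx are illustrative, since the patch under discussion is attached to the mail rather than shown in the archive):

```c
/* Sketch: pick the read-side primitive once, based on how the
 * queue was created. BLOCKING queues get sleepable SRCU; all
 * others get plain RCU, which is (nearly) free. */
static inline int hctx_lock(struct blk_mq_hw_ctx *hctx)
{
	if (hctx->flags & BLK_MQ_F_BLOCKING)
		return srcu_read_lock(&hctx->queue_rq_srcu);
	rcu_read_lock();
	return 0;	/* index unused on the RCU path */
}

static inline void hctx_unlock(struct blk_mq_hw_ctx *hctx, int srcu_idx)
{
	if (hctx->flags & BLK_MQ_F_BLOCKING)
		srcu_read_unlock(&hctx->queue_rq_srcu, srcu_idx);
	else
		rcu_read_unlock();
}
```

The dispatch path would then bracket every .queue_rq() call with hctx_lock()/hctx_unlock(), and quiescing becomes a synchronize_srcu() or synchronize_rcu() per hardware queue, depending on the same flag.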
Hello Ming,

On Thu, Oct 6, 2016 at 5:08 AM, Bart Van Assche wrote:
> On 10/05/2016 12:11 PM, Sagi Grimberg wrote:
>> I was referring to whether we can take srcu in the submission path
>> conditional on the hctx being STOPPED?
>
> Hello Sagi,
>
> Regarding run-time overhead:
> * rcu_read_lock() is a no-op on CONFIG_PREEMPT_NONE kernels and is
>   translated into preempt_disable() on CONFIG_PREEMPT kernels
Hello Ming,
Can you have a look at the attached patch? That patch uses an srcu read
lock for all queue types, whether or not the BLK_MQ_F_BLOCKING flag has
been set. Additionally, I have dropped the QUEUE_FLAG_QUIESCING flag.
Just like previous versions, this patch has been tested.
On Wed, Oct 5, 2016 at 10:46 PM, Bart Van Assche wrote:
> On 10/04/16 21:32, Ming Lei wrote:
>> On Wed, Oct 5, 2016 at 12:16 PM, Bart Van Assche wrote:
>>> On 10/01/16 15:56, Ming Lei wrote:
>>>> If we just call the rcu/srcu read lock (or the mutex) around
>>>> .queue_rq(), the above code needn't be duplicated any more.
>>>
>>> Hello Ming,
>>>
>>> Can you have a look at the attached patch? That patch uses an srcu read
>>> lock for all queue types, whether or not the BLK_MQ_F_BLOCKING flag has
>>> been set.
On Fri, Sep 30, 2016 at 11:55 PM, Bart Van Assche wrote:
> On 09/29/16 14:51, Ming Lei wrote:
>> On Thu, Sep 29, 2016 at 7:59 AM, Bart Van Assche wrote:
>>> blk_quiesce_queue() prevents new queue_rq() invocations from occurring
>>
>> blk_mq_quiesce_queue()
>
> Thanks, I will update the patch title and patch description.

+void blk_mq_quiesce_queue(struct request_queue *q)
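The body of the renamed function is not shown in the archive, but given the srcu-per-hctx design discussed in this thread, one plausible shape is the following kernel-style sketch (illustrative only; the attached patch may differ):

```c
void blk_mq_quiesce_queue(struct request_queue *q)
{
	struct blk_mq_hw_ctx *hctx;
	unsigned int i;
	bool rcu = false;

	/* Stop new dispatches, then wait out every queue_rq() call
	 * that is already inside its read-side critical section. */
	blk_mq_stop_hw_queues(q);
	queue_for_each_hw_ctx(q, hctx, i) {
		if (hctx->flags & BLK_MQ_F_BLOCKING)
			synchronize_srcu(&hctx->queue_rq_srcu);
		else
			rcu = true;
	}
	/* One grace period covers all the non-blocking queues. */
	if (rcu)
		synchronize_rcu();
}
```

Note that a single synchronize_rcu() suffices for every non-blocking hctx, while SRCU domains are per-hctx and must be synchronized individually.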
On Thu, Sep 29, 2016 at 7:59 AM, Bart Van Assche wrote:
> blk_quiesce_queue() prevents new queue_rq() invocations from
blk_mq_quiesce_queue()
> occurring and waits until ongoing invocations have finished. This
> function does *not* wait until all outstanding requests have
I guess it still may wait
blk_quiesce_queue() prevents new queue_rq() invocations from
occurring and waits until ongoing invocations have finished. This
function does *not* wait until all outstanding requests have
finished (i.e. it does not wait for the invocation of request.end_io()).
blk_resume_queue() resumes normal I/O processing.
Signed-off