Hi Hannes and Ben,
Thanks for your comments! We agree with you.
There is indeed a potential risk: even if we trigger an event to
refresh the udev database and continue to use this multipath device,
it does not seem really safe to do it this way.
-original-
Hi Ben,
On 2018/1/23 21:03, Benjamin Marzinski wrote:
On Tue, Jan 23, 2018 at 04:57:34PM +0000, Bart Van Assche wrote:
> On Wed, 2018-01-24 at 00:37 +0800, Ming Lei wrote:
> > On Tue, Jan 23, 2018 at 04:24:20PM +0000, Bart Van Assche wrote:
> > > My opinion about this patch is as follows:
> > > * Changing a blk_mq_delay_run_hw_queue() call followed
On Tue, Jan 23, 2018 at 10:01:37PM +0000, Bart Van Assche wrote:
> On Wed, 2018-01-24 at 00:59 +0800, Ming Lei wrote:
> > How is that enough to fix the IO hang when driver returns STS_RESOURCE
> > and the queue is idle? If you want to follow previous dm-rq's way of
> > call
Hi Ben,
Thanks for your reply.
The purpose of updating the path's udev is that, if we want to call the interface
sysfs_attr_set_value() to trigger a uevent for this device, it might use the old
udev_device
if this device has already received a change uevent and been updated. If we use
the not-yet-updated pp->udev in
On Wed, 2018-01-24 at 00:59 +0800, Ming Lei wrote:
> How is that enough to fix the IO hang when driver returns STS_RESOURCE
> and the queue is idle? If you want to follow previous dm-rq's way of
> call blk_mq_delay_run_hw_queue() in .queue_rq(), the same trick need
> to be applied to other drivers
NetApp requested this default setting for NetApp E-Series NVMe in addition to
"multibus".
Also, finalize the product ID regex after consulting with NetApp (the FW
reporting just
"ONTAP Controller" was beta only).
This obsoletes my previous "FIX" patch
'FIX "libmultipath: hwtable: multibus for
On Wed, 2018-01-24 at 00:37 +0800, Ming Lei wrote:
> On Tue, Jan 23, 2018 at 04:24:20PM +0000, Bart Van Assche wrote:
> > My opinion about this patch is as follows:
> > * Changing a blk_mq_delay_run_hw_queue() call followed by return
> > BLK_STS_DEV_RESOURCE into return BLK_STS_RESOURCE is wrong
On Wed, 2018-01-24 at 00:49 +0800, Ming Lei wrote:
> On Tue, Jan 23, 2018 at 04:47:11PM +0000, Bart Van Assche wrote:
> > On Wed, 2018-01-24 at 00:41 +0800, Ming Lei wrote:
> > > Could you explain where to call call_rcu()? call_rcu() can't be used in
> > > IO path at all.
> >
> > Can you explain
On Tue, Jan 23, 2018 at 04:54:02PM +0000, Bart Van Assche wrote:
> On Wed, 2018-01-24 at 00:49 +0800, Ming Lei wrote:
> > On Tue, Jan 23, 2018 at 04:47:11PM +0000, Bart Van Assche wrote:
> > > On Wed, 2018-01-24 at 00:41 +0800, Ming Lei wrote:
> > > > Could you explain where to call call_rcu()?
On Tue, Jan 23, 2018 at 04:47:11PM +0000, Bart Van Assche wrote:
> On Wed, 2018-01-24 at 00:41 +0800, Ming Lei wrote:
> > Could you explain where to call call_rcu()? call_rcu() can't be used in
> > IO path at all.
>
> Can you explain what makes you think that call_rcu() can't be used in the
>
On Wed, 2018-01-24 at 00:41 +0800, Ming Lei wrote:
> Could you explain where to call call_rcu()? call_rcu() can't be used in
> IO path at all.
Can you explain what makes you think that call_rcu() can't be used in the
I/O path? As you know call_rcu() invokes a function asynchronously. From
On Tue, Jan 23, 2018 at 04:24:20PM +0000, Bart Van Assche wrote:
> On Wed, 2018-01-24 at 00:16 +0800, Ming Lei wrote:
> > @@ -1280,10 +1282,18 @@ bool blk_mq_dispatch_rq_list(struct request_queue
> > *q, struct list_head *list,
> > * - Some but not all block drivers stop a queue
On Tue, 2018-01-23 at 10:22 +0100, Mike Snitzer wrote:
> On Thu, Jan 18 2018 at 5:20pm -0500,
> Bart Van Assche wrote:
>
> > On Thu, 2018-01-18 at 17:01 -0500, Mike Snitzer wrote:
> > > And yet Laurence cannot reproduce any such lockups with your test...
> >
> > Hmm ...
On 01/23/18 08:26, Ming Lei wrote:
On Tue, Jan 23, 2018 at 08:17:02AM -0800, Bart Van Assche wrote:
On 01/22/18 16:57, Ming Lei wrote:
Even though RCU lock is held during dispatch, preemption or interrupt
can happen too, so it is simply wrong to depend on the timing to make
sure
On Tue, Jan 23, 2018 at 08:37:26AM -0800, Bart Van Assche wrote:
> On 01/23/18 08:26, Ming Lei wrote:
> > On Tue, Jan 23, 2018 at 08:17:02AM -0800, Bart Van Assche wrote:
> > > On 01/22/18 16:57, Ming Lei wrote:
> > > > Even though RCU lock is held during dispatch, preemption or interrupt
> > > >
On Tue, Jan 23, 2018 at 08:17:02AM -0800, Bart Van Assche wrote:
>
>
> On 01/22/18 16:57, Ming Lei wrote:
> > Even though RCU lock is held during dispatch, preemption or interrupt
> > can happen too, so it is simply wrong to depend on the timing to make
> > sure __blk_mq_run_hw_queue() can see
On Wed, 2018-01-24 at 00:16 +0800, Ming Lei wrote:
> @@ -1280,10 +1282,18 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q,
> struct list_head *list,
>* - Some but not all block drivers stop a queue before
>* returning BLK_STS_RESOURCE. Two exceptions are
On Tue, Jan 23 2018 at 11:16am -0500,
Ming Lei wrote:
> This status is returned from driver to block layer if device related
> resource is run out of, but driver can guarantee that IO dispatch will
> be triggered in future when the resource is available.
>
> This patch
On 01/22/18 16:57, Ming Lei wrote:
Even though RCU lock is held during dispatch, preemption or interrupt
can happen too, so it is simply wrong to depend on the timing to make
sure __blk_mq_run_hw_queue() can see the request in this situation.
It is very unlikely that this race will ever be
On Tue, 2018-01-23 at 06:39 -0600, Benjamin Marzinski wrote:
> On Mon, Jan 22, 2018 at 11:34:15AM +0100, Martin Wilck wrote:
> > On Sat, 2018-01-20 at 02:30 +0000, Wuchongyun wrote:
> > > Hi Martin and Ben,
> > > Could you help to review this patch, thanks.
> > >
> > > When receiving a change
On Tue, 2018-01-23 at 09:44 -0500, Mike Snitzer wrote:
> On Tue, Jan 23 2018 at 9:27am -0500,
> Ming Lei wrote:
>
> > Hello Martin,
> >
> > On Tue, Jan 23, 2018 at 08:30:41AM -0500, Martin K. Petersen wrote:
> > >
> > > Ming,
> > >
> > > > + * Block layer and block
On Tue, Jan 23 2018 at 9:27am -0500,
Ming Lei wrote:
> Hello Martin,
>
> On Tue, Jan 23, 2018 at 08:30:41AM -0500, Martin K. Petersen wrote:
> >
> > Ming,
> >
> > > + * Block layer and block driver specific status, which is ususally
> > > returnd
> >
Hello Martin,
On Tue, Jan 23, 2018 at 08:30:41AM -0500, Martin K. Petersen wrote:
>
> Ming,
>
> > + * Block layer and block driver specific status, which is ususally returnd
> ^^^
> > + * from driver to block layer in IO
On Tue, Jan 23, 2018 at 08:30:41AM -0500, Martin K. Petersen wrote:
>
> Ming,
>
> > + * Block layer and block driver specific status, which is ususally returnd
> ^^^
> > + * from driver to block layer in IO path.
>
>
Ming,
> + * Block layer and block driver specific status, which is ususally returnd
^^^
> + * from driver to block layer in IO path.
Given that the comment blurb is long and the flag not defined until
later, it is not
Hi Ben,
On 2018/1/23 21:03, Benjamin Marzinski wrote:
> On Tue, Jan 23, 2018 at 10:52:35AM +0100, Hannes Reinecke wrote:
>> On 01/23/2018 10:25 AM, Wuchongyun wrote:
>>> Hi Christophe, Hannes, Martin, Xose and Benjamin,
>>>
>>> We meet below issue when doing the unexport/export LUN's access
On Tue, Jan 23, 2018 at 10:52:35AM +0100, Hannes Reinecke wrote:
> On 01/23/2018 10:25 AM, Wuchongyun wrote:
> > Hi Christophe, Hannes, Martin, Xose and Benjamin,
> >
> > We meet below issue when doing the unexport/export LUN's access permission
> > test, this patch is a method to fix this
On Mon, Jan 22, 2018 at 11:34:15AM +0100, Martin Wilck wrote:
> On Sat, 2018-01-20 at 02:30 +0000, Wuchongyun wrote:
> > Hi Martin and Ben,
> > Could you help to review this patch, thanks.
> >
> > When receiving a change uevent and calling uev_update_path, current
> > code
> > not update the
On Tue, Jan 23 2018 at 7:17am -0500,
Ming Lei wrote:
> On Tue, Jan 23, 2018 at 8:15 PM, Mike Snitzer wrote:
> > On Tue, Jan 23 2018 at 5:53am -0500,
> > Ming Lei wrote:
> >
> >> Hi Mike,
> >>
> >> On Tue, Jan 23, 2018 at
On Tue, Jan 23, 2018 at 8:15 PM, Mike Snitzer wrote:
> On Tue, Jan 23 2018 at 5:53am -0500,
> Ming Lei wrote:
>
>> Hi Mike,
>>
>> On Tue, Jan 23, 2018 at 10:22:04AM +0100, Mike Snitzer wrote:
>> > On Thu, Jan 18 2018 at 5:20pm -0500,
>> > Bart Van
On Tue, Jan 23 2018 at 5:53am -0500,
Ming Lei wrote:
> Hi Mike,
>
> On Tue, Jan 23, 2018 at 10:22:04AM +0100, Mike Snitzer wrote:
> > On Thu, Jan 18 2018 at 5:20pm -0500,
> > Bart Van Assche wrote:
> >
> > > On Thu, 2018-01-18 at 17:01 -0500,
This status is returned from the driver to the block layer when a device-related
resource has run out, but the driver can guarantee that IO dispatch will
be triggered in the future once the resource becomes available.
This patch converts some drivers to use this return value. Meanwhile,
if a driver returns BLK_STS_RESOURCE
Hi Mike,
On Tue, Jan 23, 2018 at 10:22:04AM +0100, Mike Snitzer wrote:
> On Thu, Jan 18 2018 at 5:20pm -0500,
> Bart Van Assche wrote:
>
> > On Thu, 2018-01-18 at 17:01 -0500, Mike Snitzer wrote:
> > > And yet Laurence cannot reproduce any such lockups with your test...
Hi Xose,
Thank you for your review work and suggestions. The attachment is the patch file
generated by git format-patch, and it is also pasted below. Please help to submit it,
thank you.
From 091bae5fec22c61f0c3e6f9ab848fecae5203122 Mon Sep 17 00:00:00 2001
From: Tom Geng
Date: Tue,
On 01/23/2018 10:25 AM, Wuchongyun wrote:
> Hi Christophe, Hannes, Martin, Xose and Benjamin,
>
> We meet below issue when doing the unexport/export LUN's access permission
> test, this patch is a method to fix this issue.
> If it is convenient for some of you, please help to review this patch,
On Thu, Jan 18 2018 at 5:20pm -0500,
Bart Van Assche wrote:
> On Thu, 2018-01-18 at 17:01 -0500, Mike Snitzer wrote:
> > And yet Laurence cannot reproduce any such lockups with your test...
>
> Hmm ... maybe I misunderstood Laurence but I don't think that Laurence has
>
Hi Christophe, Hannes, Martin, Xose and Benjamin,
We met the issue below when doing the unexport/export LUN access permission
test; this patch is one method to fix it.
If it is convenient for some of you, please help to review this patch, thanks.
fix unexport/export LUN access permission
On Mon, Jan 22, 2018 at 05:13:03PM +0000, Bart Van Assche wrote:
> On Mon, 2018-01-22 at 11:35 +0800, Ming Lei wrote:
> > DM-MPATH need to allocate request from underlying queue, but when the
> > allocation fails, there is no way to make underlying queue's RESTART
> > to restart DM's queue.
> >
>
On Mon, 2018-01-22 at 11:35 +0800, Ming Lei wrote:
> DM-MPATH need to allocate request from underlying queue, but when the
> allocation fails, there is no way to make underlying queue's RESTART
> to restart DM's queue.
>
> This patch introduces blk_get_request_notify() for this purpose, and
>