Hi Jianchao,
On Wed, Jan 17, 2018 at 01:24:23PM +0800, jianchao.wang wrote:
> Hi Ming,
>
> Thanks for your kind response.
>
> On 01/17/2018 11:52 AM, Ming Lei wrote:
> >> It is here.
> >> __blk_mq_run_hw_queue()
> >>
> >> WARN_ON(!cpumask_test_cpu(raw_smp_processor_id(),
Hi Jens,
Any comments on this patch and the related patch set for blktrace [1]?
Regards,
Tao
[1]: https://www.spinics.net/lists/linux-btrace/msg00790.html
On 2018/1/11 12:09, Hou Tao wrote:
> Now blktrace supports outputting cgroup info for trace action and
> trace message, however, it can
Tang,
> There is a machine with very little max_sectors_kb size:
> [root@ceph151 queue]# pwd
> /sys/block/sdd/queue
> [root@ceph151 queue]# cat max_hw_sectors_kb
> 256
> [root@ceph151 queue]# cat max_sectors_kb
> 256
>
> The performance is very low when I run big I/Os.
> I can not modify it
From: Tang Junhui
Hello Coly:
Then in bch_count_io_errors(), why do we still keep this code:
> 92 unsigned errors = atomic_add_return(1 << IO_ERROR_SHIFT,
> 93 &ca->io_errors);
> 94 errors >>=
Hi Ming,
Thanks for your kind response.
On 01/17/2018 11:52 AM, Ming Lei wrote:
>> It is here.
>> __blk_mq_run_hw_queue()
>>
>> WARN_ON(!cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask) &&
>> cpu_online(hctx->next_cpu));
> I think this warning is triggered after the CPU
Bart,
> Fixes: 4246a0b63bd8 ("block: add a bi_error field to struct bio")
> Signed-off-by: Bart Van Assche
Looks fine.
Reviewed-by: Martin K. Petersen
--
Martin K. Petersen Oracle Linux Engineering
This one is redundant and should be dropped. I ran git-format-patch
twice after a quick rebase to tweak the subject and header.
Sorry for the confusion.
On Tue, Jan 16 2018 at 11:33pm -0500,
Mike Snitzer wrote:
> From: Ming Lei
>
>
From: Ming Lei
blk_insert_cloned_request() is called in the fast path of a dm-rq driver
(e.g. blk-mq request-based DM mpath). blk_insert_cloned_request() uses
blk_mq_request_bypass_insert() to directly append the request to the
blk-mq hctx->dispatch_list of the underlying
Signed-off-by: Mike Snitzer
---
block/blk-exec.c | 2 +-
block/blk-mq-sched.c | 2 +-
block/blk-mq-sched.h | 2 +-
block/blk-mq.c | 16 +++-
4 files changed, 10 insertions(+), 12 deletions(-)
diff --git a/block/blk-exec.c b/block/blk-exec.c
index
From: Ming Lei
blk_insert_cloned_request() is called in the fast path of a dm-rq driver
(e.g. blk-mq request-based DM mpath). blk_insert_cloned_request() uses
blk_mq_request_bypass_insert() to directly append the request to the
blk-mq hctx->dispatch_list of the underlying
No functional change. Just makes code flow more logically.
In following commit, __blk_mq_try_issue_directly() will be used to
return the dispatch result (blk_status_t) to DM. DM needs this
information to improve IO merging.
Signed-off-by: Mike Snitzer
---
block/blk-mq.c |
Hi Jens,
I spent a decent amount of time going over this and am happy with it.
Hopefully you'll be too.
Thanks,
Mike
Mike Snitzer (2):
blk-mq: factor out a few helpers from __blk_mq_try_issue_directly
blk-mq-sched: remove unused 'can_block' arg from blk_mq_sched_insert_request
Ming Lei
Hi Jianchao,
On Wed, Jan 17, 2018 at 10:56:13AM +0800, jianchao.wang wrote:
> Hi Ming,
>
> Thanks for your patch and kind response.
You are welcome!
>
> On 01/16/2018 11:32 PM, Ming Lei wrote:
> > OK, I got it, and it should have been the only corner case in which
> > all CPUs mapped to this
Hi Ming,
Thanks for your patch and kind response.
On 01/16/2018 11:32 PM, Ming Lei wrote:
> OK, I got it, and it should have been the only corner case in which
> all CPUs mapped to this hctx become offline, and I believe the following
> patch should address this case, could you give a test?
>
On Tue, Jan 16, 2018 at 10:52 PM, Matthew Wilcox wrote:
>
> I see the improvements that Facebook have been making to the nbd driver,
> and I think that's a wonderful thing. Maybe the outcome of this topic
> is simply: "Shut up, Matthew, this is good enough".
>
> It's clear
On Wed, 2018-01-17 at 09:23 +0800, Ming Lei wrote:
> On Wed, Jan 17, 2018 at 8:03 AM, Bart Van Assche
> wrote:
> > On Tue, 2018-01-16 at 17:32 -0500, Mike Snitzer wrote:
> > > Therefore it seems to me that all queue_attr_{show,store} are racy vs
> > >
On Wed, Jan 17, 2018 at 8:03 AM, Bart Van Assche wrote:
> On Tue, 2018-01-16 at 17:32 -0500, Mike Snitzer wrote:
>> Therefore it seems to me that all queue_attr_{show,store} are racy vs
>> blk_unregister_queue() removing the 'queue' kobject.
>>
>> And it was just that
On Tue, 2018-01-16 at 06:52 -0800, Matthew Wilcox wrote:
> I see the improvements that Facebook have been making to the nbd driver,
> and I think that's a wonderful thing. Maybe the outcome of this topic
> is simply: "Shut up, Matthew, this is good enough".
>
> It's clear that there's an
On Tue, 2018-01-16 at 17:32 -0500, Mike Snitzer wrote:
> Therefore it seems to me that all queue_attr_{show,store} are racy vs
> blk_unregister_queue() removing the 'queue' kobject.
>
> And it was just that __elevator_change() was myopically fixed to address
> the race whereas a more generic
On Tue, 2018-01-16 at 15:28 -0800, James Bottomley wrote:
> On Tue, 2018-01-16 at 18:23 -0500, Theodore Ts'o wrote:
> > On Tue, Jan 16, 2018 at 06:52:40AM -0800, Matthew Wilcox wrote:
> > >
> > >
> > > I see the improvements that Facebook have been making to the nbd
> > > driver, and I think
On Tue, 2018-01-16 at 18:23 -0500, Theodore Ts'o wrote:
> On Tue, Jan 16, 2018 at 06:52:40AM -0800, Matthew Wilcox wrote:
> >
> >
> > I see the improvements that Facebook have been making to the nbd
> > driver, and I think that's a wonderful thing. Maybe the outcome of
> > this topic is simply:
On Tue, Jan 16, 2018 at 06:52:40AM -0800, Matthew Wilcox wrote:
>
> I see the improvements that Facebook have been making to the nbd driver,
> and I think that's a wonderful thing. Maybe the outcome of this topic
> is simply: "Shut up, Matthew, this is good enough".
>
> It's clear that there's
On Tue, 2018-01-16 at 06:52 -0800, Matthew Wilcox wrote:
> I see the improvements that Facebook have been making to the nbd driver,
> and I think that's a wonderful thing. Maybe the outcome of this topic
> is simply: "Shut up, Matthew, this is good enough".
>
> It's clear that there's an
On Tue, Jan 16 2018 at 1:17pm -0500,
Bart Van Assche wrote:
> The __blk_mq_register_dev(), blk_mq_unregister_dev(),
> elv_register_queue() and elv_unregister_queue() calls need to be
> protected with sysfs_lock but other code in these functions not.
> Hence protect only
On 01/16/2018 10:10 PM, SF Markus Elfring wrote:
From: Markus Elfring
Date: Tue, 16 Jan 2018 22:00:15 +0100
Omit an extra message for a memory allocation failure in this function.
This issue was detected by using the Coccinelle software.
Signed-off-by: Markus
From: Markus Elfring
Date: Tue, 16 Jan 2018 22:00:15 +0100
Omit an extra message for a memory allocation failure in this function.
This issue was detected by using the Coccinelle software.
Signed-off-by: Markus Elfring
---
Fixes: 4246a0b63bd8 ("block: add a bi_error field to struct bio")
Signed-off-by: Bart Van Assche
---
block/bio-integrity.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/block/bio-integrity.c b/block/bio-integrity.c
index 23b42e8aa03e..9cfdd6c83b5b 100644
---
Because the hctx lock is not held around the only
blk_mq_tag_wakeup_all() call in the block layer, the wait queue
entry removal in blk_mq_dispatch_wake() is protected by the wait
queue lock only. Since the hctx->dispatch_wait entry can occur on
any of the SBQ_WAIT_QUEUES, the wait queue presence
On Mon, 2018-01-15 at 18:13 -0500, Mike Snitzer wrote:
> On Mon, Jan 15 2018 at 6:10pm -0500,
> Mike Snitzer wrote:
>
> > On Mon, Jan 15 2018 at 5:51pm -0500,
> > Bart Van Assche wrote:
> >
> > > On Mon, 2018-01-15 at 17:15 -0500, Mike Snitzer
This patch does not change any functionality.
Signed-off-by: Bart Van Assche
---
block/elevator.c | 8
1 file changed, 8 insertions(+)
diff --git a/block/elevator.c b/block/elevator.c
index 4f00b53cd5fd..e87e9b43aba0 100644
--- a/block/elevator.c
+++
The __blk_mq_register_dev(), blk_mq_unregister_dev(),
elv_register_queue() and elv_unregister_queue() calls need to be
protected with sysfs_lock but other code in these functions not.
Hence protect only this code with sysfs_lock. This patch fixes a
locking inversion issue in blk_unregister_queue()
Hello Jens,
The three patches in this series are what I came up with after having
analyzed a lockdep complaint triggered by blk_unregister_queue. Please
consider these patches for kernel v4.16.
Thanks,
Bart.
Bart Van Assche (3):
block: Unexport elv_register_queue() and elv_unregister_queue()
These two functions are only called from inside the block layer so
unexport them.
Signed-off-by: Bart Van Assche
---
block/blk.h | 3 +++
block/elevator.c | 2 --
include/linux/elevator.h | 2 --
3 files changed, 3 insertions(+), 4 deletions(-)
diff
On Tue, Jan 16 2018 at 12:41pm -0500,
Jens Axboe wrote:
> On 1/16/18 10:38 AM, Mike Snitzer wrote:
> > On Tue, Jan 16 2018 at 12:20pm -0500,
> > Jens Axboe wrote:
> >
> >> On 1/16/18 8:01 AM, Mike Snitzer wrote:
> >>> From: Ming Lei
> >>>
On 1/16/18 10:38 AM, Mike Snitzer wrote:
> On Tue, Jan 16 2018 at 12:20pm -0500,
> Jens Axboe wrote:
>
>> On 1/16/18 8:01 AM, Mike Snitzer wrote:
>>> From: Ming Lei
>>>
>>> blk_insert_cloned_request() is called in fast path of dm-rq driver, and
>>> in this
On Tue, Jan 16 2018 at 12:20pm -0500,
Jens Axboe wrote:
> On 1/16/18 8:01 AM, Mike Snitzer wrote:
> > From: Ming Lei
> >
> > blk_insert_cloned_request() is called in fast path of dm-rq driver, and
> > in this function we append request to
On 1/16/18 8:01 AM, Mike Snitzer wrote:
> From: Ming Lei
>
> blk_insert_cloned_request() is called in fast path of dm-rq driver, and
> in this function we append request to hctx->dispatch_list of the underlying
> queue directly.
>
> 1) This way isn't efficient enough
On Tue, Jan 16 2018 at 10:01P -0500,
Mike Snitzer wrote:
> From: Ming Lei
>
> blk_insert_cloned_request() is called in fast path of dm-rq driver, and
> in this function we append request to hctx->dispatch_list of the underlying
> queue directly.
>
> 1)
On Tue, Jan 16, 2018 at 03:22:18PM +0000, Don Brace wrote:
> > -Original Message-
> > From: Laurence Oberman [mailto:lober...@redhat.com]
> > Sent: Tuesday, January 16, 2018 7:29 AM
> > To: Thomas Gleixner ; Ming Lei
> > Cc: Christoph Hellwig
On Tue, 2018-01-16 at 15:22 +0000, Don Brace wrote:
> > -Original Message-
> > From: Laurence Oberman [mailto:lober...@redhat.com]
> > Sent: Tuesday, January 16, 2018 7:29 AM
> > To: Thomas Gleixner ; Ming Lei
> > Cc: Christoph Hellwig
On Tue, Jan 16, 2018 at 10:31:42PM +0800, jianchao.wang wrote:
> Hi minglei
>
> On 01/16/2018 08:10 PM, Ming Lei wrote:
> >>> - next_cpu = cpumask_next(hctx->next_cpu, hctx->cpumask);
> >>> + next_cpu = cpumask_next_and(hctx->next_cpu, hctx->cpumask,
> >>> +
> -Original Message-
> From: Laurence Oberman [mailto:lober...@redhat.com]
> Sent: Tuesday, January 16, 2018 7:29 AM
> To: Thomas Gleixner ; Ming Lei
> Cc: Christoph Hellwig ; Jens Axboe ;
>
From: Ming Lei
blk_insert_cloned_request() is called in fast path of dm-rq driver, and
in this function we append request to hctx->dispatch_list of the underlying
queue directly.
1) This way isn't efficient enough because hctx lock is always required
2) With
From: Ming Lei
In the following patch, we will use blk_mq_try_issue_directly() for DM
to return the dispatch result. DM needs this information to improve
IO merging.
Signed-off-by: Ming Lei
Signed-off-by: Mike Snitzer
---
I see the improvements that Facebook have been making to the nbd driver,
and I think that's a wonderful thing. Maybe the outcome of this topic
is simply: "Shut up, Matthew, this is good enough".
It's clear that there's an appetite for userspace block devices; not for
swap devices or the root
Hi minglei
On 01/16/2018 08:10 PM, Ming Lei wrote:
>>> - next_cpu = cpumask_next(hctx->next_cpu, hctx->cpumask);
>>> + next_cpu = cpumask_next_and(hctx->next_cpu, hctx->cpumask,
>>> + cpu_online_mask);
>>> if (next_cpu >= nr_cpu_ids)
>>> -
On Tue, 2018-01-16 at 12:25 +0100, Thomas Gleixner wrote:
> On Tue, 16 Jan 2018, Ming Lei wrote:
>
> > On Mon, Jan 15, 2018 at 09:40:36AM -0800, Christoph Hellwig wrote:
> > > On Tue, Jan 16, 2018 at 12:03:43AM +0800, Ming Lei wrote:
> > > > Hi,
> > > >
> > > > These two patches fixes IO hang
On Tue, Jan 16, 2018 at 12:25:19PM +0100, Thomas Gleixner wrote:
> On Tue, 16 Jan 2018, Ming Lei wrote:
>
> > On Mon, Jan 15, 2018 at 09:40:36AM -0800, Christoph Hellwig wrote:
> > > On Tue, Jan 16, 2018 at 12:03:43AM +0800, Ming Lei wrote:
> > > > Hi,
> > > >
> > > > These two patches fixes IO
Hi Jianchao,
On Tue, Jan 16, 2018 at 06:12:09PM +0800, jianchao.wang wrote:
> Hi Ming
>
> On 01/12/2018 10:53 AM, Ming Lei wrote:
> > From: Christoph Hellwig
> >
> > The previous patch assigns interrupt vectors to all possible CPUs, so
> > now hctx can be mapped to possible CPUs,
On Tue, 16 Jan 2018, Ming Lei wrote:
> On Mon, Jan 15, 2018 at 09:40:36AM -0800, Christoph Hellwig wrote:
> > On Tue, Jan 16, 2018 at 12:03:43AM +0800, Ming Lei wrote:
> > > Hi,
> > >
> > > These two patches fixes IO hang issue reported by Laurence.
> > >
> > > 84676c1f21 ("genirq/affinity:
Hi Ming
On 01/12/2018 10:53 AM, Ming Lei wrote:
> From: Christoph Hellwig
>
> The previous patch assigns interrupt vectors to all possible CPUs, so
> now hctx can be mapped to possible CPUs, this patch applies this fact
> to simplify queue mapping & schedule so that we don't need
Ming or Christoph,
would you mind to send this patch to stable #4.12? Or is the fixes tag
enough to get this fixed in all related releases?
Regards,
Stefan
On 01/14/2018 03:42 PM, Coly Li wrote:
> Currently bcache does not handle backing device failure, if backing
> device is offline and disconnected from system, its bcache device can still
> be accessible. If the bcache device is in writeback mode, I/O requests even
> can success if the requests hit
On 01/14/2018 03:42 PM, Coly Li wrote:
> If a bcache device is configured to writeback mode, current code does not
> handle write I/O errors on backing devices properly.
>
> In writeback mode, write request is written to cache device, and
> latter being flushed to backing device. If I/O failed
On 16/01/2018 5:05 PM, Hannes Reinecke wrote:
> On 01/14/2018 03:42 PM, Coly Li wrote:
>> Kernel thread routine bch_allocator_thread() references macro
>> allocator_wait() to wait for a condition or quit to do_exit()
>> when kthread_should_stop() is true. Here is the code block,
>>
>> 284
On 01/14/2018 03:42 PM, Coly Li wrote:
> In order to catch I/O error of backing device, a separate bi_end_io
> call back is required. Then a per backing device counter can record I/O
> errors number and retire the backing device if the counter reaches a
> per backing device I/O error limit.
>
>
On 01/14/2018 03:42 PM, Coly Li wrote:
> From: Tang Junhui
>
> When we run IO on a detached device and run iostat to show IO status,
> normally it will show as below (some fields omitted):
> Device: ... avgrq-sz avgqu-sz await r_await w_await svctm %util
> sdd
On 01/14/2018 03:42 PM, Coly Li wrote:
> In patch "bcache: fix cached_dev->count usage for bch_cache_set_error()",
> cached_dev_get() is called when creating dc->writeback_thread, and
> cached_dev_put() is called when exiting dc->writeback_thread. This
> modification works well unless people
On 01/14/2018 03:42 PM, Coly Li wrote:
> Kernel thread routine bch_allocator_thread() references macro
> allocator_wait() to wait for a condition or quit to do_exit()
> when kthread_should_stop() is true. Here is the code block,
>
> 284 while (1) {
On 01/14/2018 03:42 PM, Coly Li wrote:
> dc->writeback_rate_update_seconds can be set via sysfs and its value can
> be set to [1, ULONG_MAX]. It does not make sense to set such a large
> value, 60 seconds is long enough value considering the default 5 seconds
> works well for long time.
>
>
On 01/14/2018 03:42 PM, Coly Li wrote:
> Kernel thread routine bch_writeback_thread() has the following code block,
>
> 447 down_write(&dc->writeback_lock);
> 448~450 if (check conditions) {
> 451 up_write(&dc->writeback_lock);
> 452
On Mon, Jan 15, 2018 at 10:07:38AM -0500, Mike Snitzer wrote:
> > See also:
> > https://www.redhat.com/archives/dm-devel/2017-March/msg00213.html
> > https://www.redhat.com/archives/dm-devel/2017-March/msg00226.html
>
> Right, now that you mention it it is starting to ring a bell (especially
>