Re: [PATCHSET v5] blk-mq: reimplement timeout handling
On Sat, Jan 13, 2018 at 10:45:14PM +0800, Ming Lei wrote:
> On Fri, Jan 12, 2018 at 04:55:34PM -0500, Laurence Oberman wrote:
> > On Fri, 2018-01-12 at 20:57 +, Bart Van Assche wrote:
> > > On Tue, 2018-01-09 at 08:29 -0800, Tejun Heo wrote:
> > > > Currently, blk-mq timeout path synchronizes against the usual
> > > > issue/completion path using a complex scheme involving atomic
> > > > bitflags, REQ_ATOM_*, memory barriers and subtle memory coherence
> > > > rules. Unfortunately, it contains quite a few holes.
> > >
> > > Hello Tejun,
> > >
> > > With this patch series applied I see weird hangs in blk_mq_get_tag()
> > > when I run the srp-test software. If I pull Jens' latest for-next
> > > branch and revert this patch series then the srp-test software runs
> > > successfully. Note: if you don't have InfiniBand hardware available
> > > then you will need the RDMA/CM patches for the SRP initiator and
> > > target drivers that have been posted recently on the linux-rdma
> > > mailing list to run the srp-test software.
> > >
> > > This is how I run the srp-test software in a VM:
> > >
> > > ./run_tests -c -d -r 10
> > >
> > > Here is an example of what SysRq-w reported when the hang occurred:
> > >
> > > sysrq: SysRq : Show Blocked State
> > >   task                 PC stack   pid father
> > > kworker/u8:0    D 12864     5      2 0x8000
> > > Workqueue: events_unbound sd_probe_async [sd_mod]
> > > Call Trace:
> > >  ? __schedule+0x2b4/0xbb0
> > >  schedule+0x2d/0x90
> > >  io_schedule+0xd/0x30
> > >  blk_mq_get_tag+0x169/0x290
> > >  ? finish_wait+0x80/0x80
> > >  blk_mq_get_request+0x16a/0x4f0
> > >  blk_mq_alloc_request+0x59/0xc0
> > >  blk_get_request_flags+0x3f/0x260
> > >  scsi_execute+0x33/0x1e0 [scsi_mod]
> > >  read_capacity_16.part.35+0x9c/0x460 [sd_mod]
> > >  sd_revalidate_disk+0x14bb/0x1cb0 [sd_mod]
> > >  sd_probe_async+0xf2/0x1a0 [sd_mod]
> > >  process_one_work+0x21c/0x6d0
> > >  worker_thread+0x35/0x380
> > >  ? process_one_work+0x6d0/0x6d0
> > >  kthread+0x117/0x130
> > >  ? kthread_create_worker_on_cpu+0x40/0x40
> > >  ret_from_fork+0x24/0x30
> > > systemd-udevd   D 13672  1048    285 0x0100
> > > Call Trace:
> > >  ? __schedule+0x2b4/0xbb0
> > >  schedule+0x2d/0x90
> > >  io_schedule+0xd/0x30
> > >  generic_file_read_iter+0x32f/0x970
> > >  ? page_cache_tree_insert+0x100/0x100
> > >  __vfs_read+0xcc/0x120
> > >  vfs_read+0x96/0x140
> > >  SyS_read+0x40/0xa0
> > >  do_syscall_64+0x5f/0x1b0
> > >  entry_SYSCALL64_slow_path+0x25/0x25
> > > RIP: 0033:0x7f8ce6d08d11
> > > RSP: 002b:7fff96dec288 EFLAGS: 0246 ORIG_RAX:
> > > RAX: ffda RBX: 5651de7f6e10 RCX: 7f8ce6d08d11
> > > RDX: 0040 RSI: 5651de7f6e38 RDI: 0007
> > > RBP: 5651de7ea500 R08: 7f8ce6cf1c20 R09: 5651de7f6e10
> > > R10: 006f R11: 0246 R12: 01ff
> > > R13: 01ff0040 R14: 5651de7ea550 R15: 0040
> > > systemd-udevd   D 13496  1049    285 0x0100
> > > Call Trace:
> > >  ? __schedule+0x2b4/0xbb0
> > >  schedule+0x2d/0x90
> > >  io_schedule+0xd/0x30
> > >  blk_mq_get_tag+0x169/0x290
> > >  ? finish_wait+0x80/0x80
> > >  blk_mq_get_request+0x16a/0x4f0
> > >  blk_mq_make_request+0x105/0x8e0
> > >  ? generic_make_request+0xd6/0x3d0
> > >  generic_make_request+0x103/0x3d0
> > >  ? submit_bio+0x57/0x110
> > >  submit_bio+0x57/0x110
> > >  mpage_readpages+0x13b/0x160
> > >  ? I_BDEV+0x10/0x10
> > >  ? rcu_read_lock_sched_held+0x66/0x70
> > >  ? __alloc_pages_nodemask+0x2e8/0x360
> > >  __do_page_cache_readahead+0x2a4/0x370
> > >  ? force_page_cache_readahead+0xaf/0x110
> > >  force_page_cache_readahead+0xaf/0x110
> > >  generic_file_read_iter+0x743/0x970
> > >  ? find_held_lock+0x2d/0x90
> > >  ? _raw_spin_unlock+0x29/0x40
> > >  __vfs_read+0xcc/0x120
> > >  vfs_read+0x96/0x140
> > >  SyS_read+0x40/0xa0
> > >  do_syscall_64+0x5f/0x1b0
> > >  entry_SYSCALL64_slow_path+0x25/0x25
> > > RIP: 0033:0x7f8ce6d08d11
> > > RSP: 002b:7fff96dec8b8 EFLAGS: 0246 ORIG_RAX:
> > > RAX: ffda RBX: 7f8ce7085010 RCX: 7f8ce6d08d11
> > > RDX: 0004 RSI: 7f8ce7085038 RDI: 000f
> > > RBP: 5651de7ec840 R08: R09: 7f8ce7085010
> > > R10: 7f8ce7085028 R11: 0246 R12:
> > > R13: 0004 R14: 5651de7ec890 R15: 0004
> > > systemd-udevd   D 13672  1055    285 0x0100
> > > Call Trace:
> > >  ? __schedule+0x2b4/0xbb0
> > >  schedule+0x2d/0x90
> > >  io_schedule+0xd/0x30
> > >  blk_mq_get_tag+0x169/0x290
> > >  ? finish_wait+0x80/0x80
> > >  blk_mq_get_request+0x16a/0x4f0
> > >  blk_mq_make_request+0x105/0x8e0
> > >  ? generic_make_request+0xd6/0x3d0
> > >  generic_make_request+0x103/0x3d0
> > >  ? submit_bio+0x57/0x110
> > >  submit_bio+0x57/0x110
> > >  mpage_readpages+0x13b/0x160
> > >  ? I_BDEV+0x10/0x10
> > >  ? rcu_read_lock_sched_held+0x66/0x70
> > >  ?
[PATCH] blk-mq: don't clear RQF_MQ_INFLIGHT in blk_mq_rq_ctx_init()
In case of no IO scheduler, RQF_MQ_INFLIGHT is set in blk_mq_rq_ctx_init(), but commit 7c3fb70f0341 clears it again by mistake later in the same function; fix that. This patch fixes a systemd-udevd hang when starting the multipathd service:

[  914.409660] systemd-journald[213]: Successfully sent stream file descriptor to service manager.
[  984.028104] INFO: task systemd-udevd:1518 blocked for more than 120 seconds.
[  984.030916]       Not tainted 4.15.0-rc6+ #62
[  984.032709] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  984.035818] systemd-udevd   D    0  1518    232 0x8006
[  984.035826] Call Trace:
[  984.035846]  ? __schedule+0x626/0x759
[  984.035855]  schedule+0x88/0x9b
[  984.035861]  io_schedule+0x12/0x33
[  984.035867]  __lock_page+0xef/0x127
[  984.035874]  ? add_to_page_cache_lru+0xd4/0xd4
[  984.035889]  truncate_inode_pages_range+0x533/0x62f
[  984.035929]  ? cpumask_next+0x17/0x18
[  984.035937]  ? cpumask_next+0x17/0x18
[  984.035943]  ? smp_call_function_many+0x1e4/0x218
[  984.035949]  ? __find_get_block+0x2ba/0x2ba
[  984.035961]  ? __find_get_block+0x216/0x2ba
[  984.035977]  ? on_each_cpu_cond+0xbf/0x143
[  984.035992]  __blkdev_put+0x69/0x1a5
[  984.036009]  blkdev_close+0x21/0x24
[  984.036025]  __fput+0xdb/0x18a
[  984.036039]  task_work_run+0x6f/0x80
[  984.036048]  do_exit+0x452/0x96b
[  984.036055]  ? preempt_count_add+0x7a/0x8c
[  984.036061]  do_group_exit+0x3c/0x98
[  984.036067]  get_signal+0x47c/0x4fd
[  984.036076]  do_signal+0x36/0x51f
[  984.036090]  exit_to_usermode_loop+0x3a/0x73
[  984.036096]  syscall_return_slowpath+0x97/0xcf
[  984.036103]  entry_SYSCALL_64_fastpath+0x7b/0x7d
[  984.036109] RIP: 0033:0x7f2ffce94540
[  984.036112] RSP: 002b:7d1cb888 EFLAGS: 0246 ORIG_RAX:
[  984.036117] RAX: fffc RBX: 55d9840e72d0 RCX: 7f2ffce94540
[  984.036120] RDX: 0018 RSI: 55d9840e72f8 RDI: 0007
[  984.036122] RBP: 55d984111860 R08: 55d9840e72d0 R09: 0092
[  984.036125] R10: 7f2ffce7ebb8 R11: 0246 R12: 0018
[  984.036127] R13: 00020014e200 R14: 55d9841118b0 R15: 55d9840e72e8
[ 1034.410125] systemd-journald[213]: Sent WATCHDOG=1 notification.

Cc: Laurence Oberman
Cc: Mike Snitzer
Fixes: 7c3fb70f0341 ("block: rearrange a few request fields for better cache layout")
Signed-off-by: Ming Lei
---
 block/blk-mq.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index ef9beca2d117..c1c74d891ce7 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -269,13 +269,14 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
 {
 	struct blk_mq_tags *tags = blk_mq_tags_from_data(data);
 	struct request *rq = tags->static_rqs[tag];
+	bool inflight = false;
 
 	if (data->flags & BLK_MQ_REQ_INTERNAL) {
 		rq->tag = -1;
 		rq->internal_tag = tag;
 	} else {
 		if (blk_mq_tag_busy(data->hctx)) {
-			rq->rq_flags = RQF_MQ_INFLIGHT;
+			inflight = true;
 			atomic_inc(&data->hctx->nr_active);
 		}
 		rq->tag = tag;
@@ -286,7 +287,7 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
 	/* csd/requeue_work/fifo_time is initialized before use */
 	rq->q = data->q;
 	rq->mq_ctx = data->ctx;
-	rq->rq_flags = 0;
+	rq->rq_flags = inflight ? RQF_MQ_INFLIGHT : 0;
 	rq->cpu = -1;
 	rq->cmd_flags = op;
 	if (data->flags & BLK_MQ_REQ_PREEMPT)
-- 
2.9.5
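The bug pattern here is generic: a flag set early in an init function is lost when a later statement bulk-initializes the same field. A minimal userspace sketch (hypothetical names, not the kernel code) of the broken and fixed orderings:

```c
#include <assert.h>
#include <stdbool.h>

#define RQF_MQ_INFLIGHT (1u << 0)

struct fake_rq { unsigned int rq_flags; };

/* Broken ordering: the flag set first is clobbered by the later reset. */
static void init_broken(struct fake_rq *rq, bool busy)
{
    if (busy)
        rq->rq_flags = RQF_MQ_INFLIGHT;
    /* ... initialize other fields ... */
    rq->rq_flags = 0;            /* clobbers RQF_MQ_INFLIGHT */
}

/* Fixed ordering: remember the decision in a local, apply it once. */
static void init_fixed(struct fake_rq *rq, bool busy)
{
    bool inflight = busy;
    /* ... initialize other fields ... */
    rq->rq_flags = inflight ? RQF_MQ_INFLIGHT : 0;
}
```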
[PATCH v2 12/12] bcache: stop bcache device when backing device is offline
Currently bcache does not handle backing device failure: if the backing device is offline and disconnected from the system, its bcache device can still be accessible. If the bcache device is in writeback mode, I/O requests may even succeed if they hit the cache device. That is to say, when and how bcache handles an offline backing device is undefined.

This patch tries to handle backing device offline in a rather simple way,
- Add a cached_dev->status_update_thread kernel thread to update the backing
  device status every second.
- Add cached_dev->offline_seconds to record how many seconds the backing
  device has been observed to be offline. If the backing device is offline
  for BACKING_DEV_OFFLINE_TIMEOUT (30) seconds, set dc->io_disable to 1 and
  call bcache_device_stop() to stop the bcache device linked to the offline
  backing device.

Now if a backing device is offline for BACKING_DEV_OFFLINE_TIMEOUT seconds, its bcache device will be removed; user space applications writing to it will then get errors immediately and can handle the device failure in time.

This patch is quite simple and does not handle more complicated situations. Once the bcache device is stopped, users need to recover the backing device, then register and attach it manually.

Changelog:
v2: this is newly added in the v2 patch set.

Signed-off-by: Coly Li
Cc: Michael Lyle
Cc: Hannes Reinecke
Cc: Junhui Tang
---
 drivers/md/bcache/bcache.h |  2 ++
 drivers/md/bcache/super.c  | 55 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 57 insertions(+)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 5a811959392d..9eedb35d01bc 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -338,6 +338,7 @@ struct cached_dev {
 
 	struct keybuf		writeback_keys;
 
+	struct task_struct	*status_update_thread;
 	/*
 	 * Order the write-half of writeback operations strongly in dispatch
 	 * order.  (Maintain LBA order; don't allow reads completing out of
@@ -384,6 +385,7 @@ struct cached_dev {
 #define DEFAULT_CACHED_DEV_ERROR_LIMIT	64
 	atomic_t		io_errors;
 	unsigned		error_limit;
+	unsigned		offline_seconds;
 };
 
 enum alloc_reserve {
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 14fce3623770..85adf1e29d11 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -646,6 +646,11 @@ static int ioctl_dev(struct block_device *b, fmode_t mode,
 		     unsigned int cmd, unsigned long arg)
 {
 	struct bcache_device *d = b->bd_disk->private_data;
+	struct cached_dev *dc = container_of(d, struct cached_dev, disk);
+
+	if (dc->io_disable)
+		return -EIO;
+
 	return d->ioctl(d, mode, cmd, arg);
 }
 
@@ -856,6 +861,45 @@ static void calc_cached_dev_sectors(struct cache_set *c)
 	c->cached_dev_sectors = sectors;
 }
 
+#define BACKING_DEV_OFFLINE_TIMEOUT 5
+static int cached_dev_status_update(void *arg)
+{
+	struct cached_dev *dc = arg;
+	struct request_queue *q;
+	char buf[BDEVNAME_SIZE];
+
+	/*
+	 * If this delayed worker is stopping outside, directly quit here.
+	 * dc->io_disable might be set via sysfs interface, so check it
+	 * here too.
+	 */
+	while (!kthread_should_stop() && !dc->io_disable) {
+		q = bdev_get_queue(dc->bdev);
+		if (blk_queue_dying(q))
+			dc->offline_seconds++;
+		else
+			dc->offline_seconds = 0;
+
+		if (dc->offline_seconds >= BACKING_DEV_OFFLINE_TIMEOUT) {
+			pr_err("%s: device offline for %d seconds",
+			       bdevname(dc->bdev, buf),
+			       BACKING_DEV_OFFLINE_TIMEOUT);
+			pr_err("%s: disable I/O request due to backing "
+			       "device offline", dc->disk.name);
+			dc->io_disable = true;
+			/* let others know earlier that io_disable is true */
+			smp_mb();
+			bcache_device_stop(&dc->disk);
+			break;
+		}
+
+		schedule_timeout_interruptible(HZ);
+	}
+
+	dc->status_update_thread = NULL;
+	return 0;
+}
+
 void bch_cached_dev_run(struct cached_dev *dc)
 {
 	struct bcache_device *d = &dc->disk;
@@ -898,6 +942,15 @@ void bch_cached_dev_run(struct cached_dev *dc)
 	if (sysfs_create_link(&d->kobj, &disk_to_dev(d->disk)->kobj, "dev") ||
 	    sysfs_create_link(&disk_to_dev(d->disk)->kobj, &d->kobj, "bcache"))
 		pr_debug("error creating sysfs link");
+
+	dc->status_update_thread = kthread_run(cached_dev_status_update,
+					       dc,
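The status thread's decision is a small state machine: the counter accumulates only across consecutive offline observations and resets on any online one. A userspace sketch of just that logic (hypothetical names; timeout of 5 ticks, matching the `#define` in the diff rather than the 30 seconds quoted in the commit message):

```c
#include <assert.h>
#include <stdbool.h>

#define BACKING_DEV_OFFLINE_TIMEOUT 5

/*
 * One polling tick: bump or reset the consecutive-offline counter and
 * report whether the device should now be disabled and stopped.
 */
static bool status_update_tick(bool queue_dying, unsigned *offline_seconds)
{
    if (queue_dying)
        (*offline_seconds)++;
    else
        *offline_seconds = 0;    /* any online observation resets the count */

    return *offline_seconds >= BACKING_DEV_OFFLINE_TIMEOUT;
}
```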
[PATCH v2 10/12] bcache: add backing_request_endio() for bi_end_io of attached backing device I/O
In order to catch I/O errors of the backing device, a separate bi_end_io callback is required. A per-backing-device counter can then record the number of I/O errors and retire the backing device when the counter reaches a per-backing-device I/O error limit.

This patch adds backing_request_endio() to the bcache backing device I/O code path as a preparation for more complicated backing device failure handling. So far there is no real change in code logic; I make this change a separate patch to make sure it is stable and reliable for further work.

Changelog:
v2: indeed this is newly added in this patch set.

Signed-off-by: Coly Li
Cc: Junhui Tang
Cc: Michael Lyle
---
 drivers/md/bcache/request.c   | 95 +++++++++++++++++++++++++++++++++++-------
 drivers/md/bcache/super.c     |  1 +
 drivers/md/bcache/writeback.c |  1 +
 3 files changed, 81 insertions(+), 16 deletions(-)

diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index e09c5ae745be..ad4cf71f7eab 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -139,6 +139,7 @@ static void bch_data_invalidate(struct closure *cl)
 	}
 
 	op->insert_data_done = true;
+	/* get in bch_data_insert() */
 	bio_put(bio);
 out:
 	continue_at(cl, bch_data_insert_keys, op->wq);
@@ -630,6 +631,38 @@ static void request_endio(struct bio *bio)
 	closure_put(cl);
 }
 
+static void backing_request_endio(struct bio *bio)
+{
+	struct closure *cl = bio->bi_private;
+
+	if (bio->bi_status) {
+		struct search *s = container_of(cl, struct search, cl);
+		/*
+		 * If a bio has REQ_PREFLUSH for writeback mode, it is
+		 * specially assembled in cached_dev_write() for a non-zero
+		 * write request which has REQ_PREFLUSH. We don't set
+		 * s->iop.status by this failure, the status will be decided
+		 * by result of bch_data_insert() operation.
+		 */
+		if (unlikely(s->iop.writeback &&
+			     bio->bi_opf & REQ_PREFLUSH)) {
+			char buf[BDEVNAME_SIZE];
+
+			bio_devname(bio, buf);
+			pr_err("Can't flush %s: returned bi_status %i",
+			       buf, bio->bi_status);
+		} else {
+			/* set to orig_bio->bi_status in bio_complete() */
+			s->iop.status = bio->bi_status;
+		}
+		s->recoverable = false;
+		/* should count I/O error for backing device here */
+	}
+
+	bio_put(bio);
+	closure_put(cl);
+}
+
 static void bio_complete(struct search *s)
 {
 	if (s->orig_bio) {
@@ -644,13 +677,21 @@ static void bio_complete(struct search *s)
 	}
 }
 
-static void do_bio_hook(struct search *s, struct bio *orig_bio)
+static void do_bio_hook(struct search *s,
+			struct bio *orig_bio,
+			bio_end_io_t *end_io_fn)
 {
 	struct bio *bio = &s->bio.bio;
 
 	bio_init(bio, NULL, 0);
 	__bio_clone_fast(bio, orig_bio);
-	bio->bi_end_io = request_endio;
+	/*
+	 * bi_end_io can be set separately somewhere else, e.g. the
+	 * variants in,
+	 * - cache_bio->bi_end_io from cached_dev_cache_miss()
+	 * - n->bi_end_io from cache_lookup_fn()
+	 */
+	bio->bi_end_io = end_io_fn;
 	bio->bi_private = &s->cl;
 
 	bio_cnt_set(bio, 3);
@@ -676,7 +717,7 @@ static inline struct search *search_alloc(struct bio *bio,
 
 	s = mempool_alloc(d->c->search, GFP_NOIO);
 	closure_init(&s->cl, NULL);
-	do_bio_hook(s, bio);
+	do_bio_hook(s, bio, request_endio);
 
 	s->orig_bio		= bio;
 	s->cache_miss		= NULL;
@@ -743,10 +784,11 @@ static void cached_dev_read_error(struct closure *cl)
 		trace_bcache_read_retry(s->orig_bio);
 
 		s->iop.status = 0;
-		do_bio_hook(s, s->orig_bio);
+		do_bio_hook(s, s->orig_bio, backing_request_endio);
 
 		/* XXX: invalidate cache */
 
+		/* I/O request sent to backing device */
 		closure_bio_submit(s->iop.c, bio, cl);
 	}
 
@@ -859,7 +901,7 @@ static int cached_dev_cache_miss(struct btree *b, struct search *s,
 	bio_copy_dev(cache_bio, miss);
 	cache_bio->bi_iter.bi_size	= s->insert_bio_sectors << 9;
 
-	cache_bio->bi_end_io	= request_endio;
+	cache_bio->bi_end_io	= backing_request_endio;
 	cache_bio->bi_private	= &s->cl;
 
 	bch_bio_map(cache_bio, NULL);
@@ -872,14 +914,16 @@ static int cached_dev_cache_miss(struct btree *b, struct search *s,
 	s->cache_miss	= miss;
 	s->iop.bio	= cache_bio;
 	bio_get(cache_bio);
+	/* I/O request sent to backing device */
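The only non-mechanical piece of backing_request_endio() is deciding whether a failed bio should poison s->iop.status: a failed REQ_PREFLUSH in writeback mode is only logged, because the final status is decided by the result of bch_data_insert(). A userspace sketch of that decision (hypothetical names; the status constants are placeholders, not the kernel's blk_status_t encoding):

```c
#include <assert.h>
#include <stdbool.h>

#define BLK_STS_OK    0
#define BLK_STS_IOERR 10   /* placeholder value, not the kernel's encoding */

/*
 * Decide the search status after a backing-device bio completes.
 * Returns the new iop.status; a writeback-mode flush failure is only
 * logged and leaves the current status alone.
 */
static int backing_endio_status(bool writeback, bool preflush,
                                int bi_status, int cur_status)
{
    if (bi_status == BLK_STS_OK)
        return cur_status;        /* success: nothing to record */
    if (writeback && preflush)
        return cur_status;        /* only log; bch_data_insert() decides */
    return bi_status;             /* normal failure: propagate to caller */
}
```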
[PATCH v2 06/12] bcache: set error_limit correctly
Struct cache uses io_errors for two purposes,
- Error decay: when cache set error_decay is set, io_errors is used to
  generate a small piece of delay when an I/O error happens.
- I/O error counter: in order to generate a big enough value for error
  decay, the I/O error counter value is stored left-shifted by 20 bits
  (a.k.a IO_ERROR_SHIFT).

In function bch_count_io_errors(), if the I/O error counter reaches the cache set error limit, bch_cache_set_error() will be called to retire the whole cache set. But the current code is problematic when checking the error limit; see the following code piece from bch_count_io_errors(),

 90     if (error) {
 91             char buf[BDEVNAME_SIZE];
 92             unsigned errors = atomic_add_return(1 << IO_ERROR_SHIFT,
 93                                                 &ca->io_errors);
 94             errors >>= IO_ERROR_SHIFT;
 95
 96             if (errors < ca->set->error_limit)
 97                     pr_err("%s: IO error on %s, recovering",
 98                            bdevname(ca->bdev, buf), m);
 99             else
100                     bch_cache_set_error(ca->set,
101                                         "%s: too many IO errors %s",
102                                         bdevname(ca->bdev, buf), m);
103     }

At line 94, errors is right-shifted by IO_ERROR_SHIFT bits, so it is the real error count that is compared at line 96. But ca->set->error_limit is initialized with an amplified value in bch_cache_set_alloc(),

1545    c->error_limit = 8 << IO_ERROR_SHIFT;

It means that by default, in bch_count_io_errors(), bch_cache_set_error() won't be called to retire the problematic cache device until 8<<20 errors have happened. If the average request size is 64KB, that means bcache won't handle a failed device until 512GB of data has been requested. This is far too large to be an I/O threshold, so I believe the correct error limit should be much smaller.

This patch sets the default cache set error limit to 8; then in bch_count_io_errors(), when the error counter reaches 8 (if it is the default value), bch_cache_set_error() will be called to retire the whole cache set. This patch also removes the bit shifting when storing or showing the io_error_limit value via the sysfs interface.

Nowadays most SSDs handle internal flash failure automatically by LBA address re-indirect mapping. If an I/O error can be observed by upper layer code, it is a notable error, because the SSD can no longer re-indirect map the problematic LBA address to an available flash block. This situation indicates the whole SSD will fail very soon. Therefore setting 8 as the default I/O error limit makes sense; it is enough for most cache devices.

Changelog:
v2: add Reviewed-by from Hannes.
v1: initial version for review.

Signed-off-by: Coly Li
Reviewed-by: Hannes Reinecke
Cc: Michael Lyle
Cc: Junhui Tang
---
 drivers/md/bcache/bcache.h | 1 +
 drivers/md/bcache/super.c  | 2 +-
 drivers/md/bcache/sysfs.c  | 4 ++--
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 88d938c8d027..7d7512fa4f09 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -663,6 +663,7 @@ struct cache_set {
 		ON_ERROR_UNREGISTER,
 		ON_ERROR_PANIC,
 	}			on_error;
+#define DEFAULT_IO_ERROR_LIMIT 8
 	unsigned		error_limit;
 	unsigned		error_decay;
 
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 6d888e8fea8c..a373648b5d4b 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1583,7 +1583,7 @@ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
 
 	c->congested_read_threshold_us	= 2000;
 	c->congested_write_threshold_us	= 2;
-	c->error_limit	= 8 << IO_ERROR_SHIFT;
+	c->error_limit	= DEFAULT_IO_ERROR_LIMIT;
 
 	return c;
 err:
diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index b7166c504cdb..ba62e987b503 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -560,7 +560,7 @@ SHOW(__bch_cache_set)
 
 	/* See count_io_errors for why 88 */
 	sysfs_print(io_error_halflife,	c->error_decay * 88);
-	sysfs_print(io_error_limit,	c->error_limit >> IO_ERROR_SHIFT);
+	sysfs_print(io_error_limit,	c->error_limit);
 
 	sysfs_hprint(congested,
 		     ((uint64_t) bch_get_congested(c)) << 9);
@@ -660,7 +660,7 @@ STORE(__bch_cache_set)
 	}
 
 	if (attr == &sysfs_io_error_limit)
-		c->error_limit = strtoul_or_return(buf) << IO_ERROR_SHIFT;
+		c->error_limit = strtoul_or_return(buf);
 
 	/* See count_io_errors() for why 88 */
 	if (attr == &sysfs_io_error_halflife)
-- 
2.15.1
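The arithmetic of the bug is easy to check outside the kernel: the counter is stored amplified by 1 << IO_ERROR_SHIFT, so comparing the de-amplified count against an amplified limit requires 8 << 20 real errors before tripping. A standalone sketch of the check in bch_count_io_errors() (simplified, single-threaded):

```c
#include <assert.h>
#include <stdbool.h>

#define IO_ERROR_SHIFT 20

/*
 * One more I/O error arrives: bump the stored counter by the amplified
 * unit, de-amplify it, and compare against the limit. Returns true when
 * the cache set should be retired.
 */
static bool count_io_error(unsigned *io_errors, unsigned error_limit)
{
    *io_errors += 1u << IO_ERROR_SHIFT;
    unsigned errors = *io_errors >> IO_ERROR_SHIFT;
    return errors >= error_limit;
}
```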
[PATCH v2 07/12] bcache: add CACHE_SET_IO_DISABLE to struct cache_set flags
When too many I/Os have failed on the cache device, bch_cache_set_error() is called in the error handling code path to retire the whole problematic cache set. If new I/O requests continue to come in and take the refcount dc->count, the cache set won't be retired immediately; this is a problem.

Furthermore, several kernel threads and self-armed kernel workers may still be running after bch_cache_set_error() is called. It takes quite a while for them to stop, or they won't stop at all. They also prevent the cache set from being retired.

The solution in this patch is to add a per-cache-set flag to disable I/O requests on this cache set and all attached backing devices. Then newly arriving I/O requests can be rejected in *_make_request() before taking a refcount, and kernel threads and self-armed kernel workers can stop very fast when the flag bit CACHE_SET_IO_DISABLE is set.

Because bcache also does internal I/O for writeback, garbage collection, bucket allocation and journaling, this kind of I/O should be disabled after bch_cache_set_error() is called as well. So closure_bio_submit() is modified to check whether CACHE_SET_IO_DISABLE is set on cache_set->flags. If it is set, closure_bio_submit() will set bio->bi_status to BLK_STS_IOERR and return without calling generic_make_request().

A sysfs interface is also added to set or clear the CACHE_SET_IO_DISABLE bit in cache_set->flags, to disable or enable cache set I/O for debugging. It is helpful for triggering more corner-case issues on failed cache devices.

Changelog
v2, - use cache_set->flags to set io disable bit, suggested by Junhui.
    - check CACHE_SET_IO_DISABLE in bch_btree_gc() to stop a while-loop,
      this is reported and inspired from the original patch of Pavel Vazharov.
v1, initial version.

Signed-off-by: Coly Li
Reviewed-by: Hannes Reinecke
Cc: Junhui Tang
Cc: Michael Lyle
Cc: Pavel Vazharov
---
 drivers/md/bcache/alloc.c     |  3 ++-
 drivers/md/bcache/bcache.h    | 18 ++++++++++++++++++
 drivers/md/bcache/btree.c     | 10 +++++++---
 drivers/md/bcache/io.c        |  2 +-
 drivers/md/bcache/journal.c   |  4 ++--
 drivers/md/bcache/request.c   | 26 +++++++++++++++++---------
 drivers/md/bcache/super.c     |  6 +++++-
 drivers/md/bcache/sysfs.c     | 20 ++++++++++++++++++++
 drivers/md/bcache/util.h      |  6 ++----
 drivers/md/bcache/writeback.c | 35 +++++++++++++++++++++++------------
 10 files changed, 101 insertions(+), 29 deletions(-)

diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
index 458e1d38577d..004cc3cc6123 100644
--- a/drivers/md/bcache/alloc.c
+++ b/drivers/md/bcache/alloc.c
@@ -287,7 +287,8 @@ do {									\
 			break;						\
 									\
 		mutex_unlock(&(ca)->set->bucket_lock);			\
-		if (kthread_should_stop()) {				\
+		if (kthread_should_stop() ||				\
+		    test_bit(CACHE_SET_IO_DISABLE, &ca->set->flags)) {	\
 			set_current_state(TASK_RUNNING);		\
 			return 0;					\
 		}							\
diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 7d7512fa4f09..c41736960045 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -475,10 +475,15 @@ struct gc_stat {
  *
  * CACHE_SET_RUNNING means all cache devices have been registered and journal
  * replay is complete.
+ *
+ * CACHE_SET_IO_DISABLE is set when bcache is stopping the whole cache set, all
+ * external and internal I/O should be denied when this flag is set.
+ *
 */
 #define CACHE_SET_UNREGISTERING		0
 #define	CACHE_SET_STOPPING		1
 #define	CACHE_SET_RUNNING		2
+#define CACHE_SET_IO_DISABLE		4
 
 struct cache_set {
 	struct closure		cl;
@@ -862,6 +867,19 @@ static inline void wake_up_allocators(struct cache_set *c)
 		wake_up_process(ca->alloc_thread);
 }
 
+static inline void closure_bio_submit(struct cache_set *c,
+				      struct bio *bio,
+				      struct closure *cl)
+{
+	closure_get(cl);
+	if (unlikely(test_bit(CACHE_SET_IO_DISABLE, &c->flags))) {
+		bio->bi_status = BLK_STS_IOERR;
+		bio_endio(bio);
+		return;
+	}
+	generic_make_request(bio);
+}
+
 /* Forward declarations */
 
 void bch_count_io_errors(struct cache *, blk_status_t, int, const char *);
diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index bf3a48aa9a9a..0a0bc63011b4 100644
--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -1744,6
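closure_bio_submit() becomes the single choke point: once the flag bit is set, every submission, internal or external, fails fast with an I/O error instead of reaching the device. A userspace analog of the gate (hypothetical names; bio status modeled as an int return value):

```c
#include <assert.h>
#include <stdatomic.h>

#define BLK_STS_OK    0
#define BLK_STS_IOERR 10            /* placeholder, not the kernel encoding */
#define CACHE_SET_IO_DISABLE 4      /* bit number, as in the patch */

struct fake_cache_set { atomic_ulong flags; };

/* Fail fast when CACHE_SET_IO_DISABLE is set; "submit" otherwise. */
static int submit_gated(struct fake_cache_set *c)
{
    if (atomic_load(&c->flags) & (1ul << CACHE_SET_IO_DISABLE))
        return BLK_STS_IOERR;       /* completed immediately with an error */
    return BLK_STS_OK;              /* would call generic_make_request() */
}

/* Atomically set the disable bit, like set_bit() on cache_set->flags. */
static void set_io_disable(struct fake_cache_set *c)
{
    atomic_fetch_or(&c->flags, 1ul << CACHE_SET_IO_DISABLE);
}
```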
[PATCH v2 08/12] bcache: stop all attached bcache devices for a retired cache set
When there are too many I/O errors on the cache device, the current bcache code will retire the whole cache set and detach all bcache devices. But the detached bcache devices are not stopped, which is problematic when bcache is in writeback mode.

If the retired cache set holds dirty data for the backing devices, continued writes to the bcache device go directly to the backing device. If the LBA of such a write request has a dirty version cached on the cache device, then the next time the cache device is re-registered and the backing device re-attached to it, the stale dirty data on the cache device will be written back to the backing device and overwrite the newer, directly written data. This situation causes serious data corruption.

This patch checks whether cache_set->io_disable is true in __cache_set_unregister(). If cache_set->io_disable is true, it means the cache set is being unregistered because of too many I/O errors, so all attached bcache devices will be stopped as well. If cache_set->io_disable is not true, it means __cache_set_unregister() was triggered by writing 1 to the sysfs file /sys/fs/bcache//bcache/stop. This is an exception: because users request it explicitly, this patch keeps the existing behavior and does not stop any bcache device.

Even when the failed cache device has no dirty data, stopping the bcache device is still the behavior desired by many Ceph and database users: their applications will report I/O errors due to the disappeared bcache device, and operations people will know the cache device is broken or disconnected.

Changelog:
v2: add Reviewed-by from Hannes.
v1: initial version for review.

Signed-off-by: Coly Li
Reviewed-by: Hannes Reinecke
Cc: Junhui Tang
Cc: Michael Lyle
---
 drivers/md/bcache/super.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 4204d75aee7b..97e3bb8e1aee 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1478,6 +1478,14 @@ static void __cache_set_unregister(struct closure *cl)
 			dc = container_of(c->devices[i],
 					  struct cached_dev, disk);
 			bch_cached_dev_detach(dc);
+			/*
+			 * If we come here by too many I/O errors,
+			 * bcache device should be stopped too, to
+			 * keep data consistency on cache and
+			 * backing devices.
+			 */
+			if (test_bit(CACHE_SET_IO_DISABLE, &c->flags))
+				bcache_device_stop(c->devices[i]);
 		} else {
 			bcache_device_stop(c->devices[i]);
 		}
-- 
2.15.1
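The corruption scenario in the message is mechanical enough to simulate: a dirty cache entry survives the detach, the application writes newer data straight to the backing device, and a later re-attach writes the stale entry back. A tiny single-LBA model (purely illustrative, hypothetical structures):

```c
#include <assert.h>
#include <stdbool.h>

/* One LBA worth of state: the backing device's data plus an optional
 * dirty version remembered by the (failed, then re-attached) cache. */
struct lba_state {
    int  backing_data;
    int  cache_data;
    bool cache_dirty;
};

/* Writeback cache accepts the write and marks the entry dirty. */
static void cached_write(struct lba_state *s, int v)
{
    s->cache_data = v;
    s->cache_dirty = true;
}

/* After detach, writes bypass the cache entirely. */
static void direct_write(struct lba_state *s, int v)
{
    s->backing_data = v;
}

/* On re-attach, old dirty entries are written back -- this is the hazard:
 * the stale value overwrites the newer directly written one. */
static void reattach_writeback(struct lba_state *s)
{
    if (s->cache_dirty) {
        s->backing_data = s->cache_data;
        s->cache_dirty = false;
    }
}
```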
[PATCH v2 09/12] bcache: fix inaccurate io state for detached bcache devices
From: Tang Junhui

When we run I/O on a detached device and then run iostat to show the I/O status, it normally looks like below (some fields omitted):

Device: ... avgrq-sz avgqu-sz await r_await w_await svctm %util
sdd     ...    15.89     0.53  1.82    0.20    2.23  1.81  52.30
bcache0 ...    15.89   115.42  0.00    0.00    0.00  2.40  69.60

but after the I/O has stopped, there are still very big avgqu-sz and %util values, as below:

Device: ... avgrq-sz avgqu-sz await r_await w_await svctm %util
bcache0 ...        0  5326.32  0.00    0.00    0.00  0.00 100.10

The reason for this issue is that for a detached device only generic_start_io_acct() is called, and generic_end_io_acct() is never called, in cached_dev_make_request(). See the code:

//start generic_start_io_acct()
generic_start_io_acct(q, rw, bio_sectors(bio), &d->disk->part0);
if (cached_dev_get(dc)) {
	//will call back generic_end_io_acct()
} else {
	//will not call generic_end_io_acct()
}

This patch calls generic_end_io_acct() at the end of I/O for detached devices, so we can show the I/O state correctly.

(Modified to use GFP_NOIO in kzalloc() by Coly Li)

Signed-off-by: Tang Junhui
Reviewed-by: Coly Li
---
 drivers/md/bcache/request.c | 58 +++++++++++++++++++++++++++++++++++++++------
 1 file changed, 51 insertions(+), 7 deletions(-)

diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index 02296bda6384..e09c5ae745be 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -986,6 +986,55 @@ static void cached_dev_nodata(struct closure *cl)
 	continue_at(cl, cached_dev_bio_complete, NULL);
 }
 
+struct detached_dev_io_private {
+	struct bcache_device	*d;
+	unsigned long		start_time;
+	bio_end_io_t		*bi_end_io;
+	void			*bi_private;
+};
+
+static void detatched_dev_end_io(struct bio *bio)
+{
+	struct detached_dev_io_private *ddip;
+
+	ddip = bio->bi_private;
+	bio->bi_end_io = ddip->bi_end_io;
+	bio->bi_private = ddip->bi_private;
+
+	generic_end_io_acct(ddip->d->disk->queue,
+			    bio_data_dir(bio),
+			    &ddip->d->disk->part0, ddip->start_time);
+
+	kfree(ddip);
+
+	bio->bi_end_io(bio);
+}
+
+static void detached_dev_do_request(struct bcache_device *d, struct bio *bio)
+{
+	struct detached_dev_io_private *ddip;
+	struct cached_dev *dc = container_of(d, struct cached_dev, disk);
+
+	/*
+	 * no need to call closure_get(&dc->disk.cl),
+	 * because upper layer had already opened bcache device,
+	 * which would call closure_get(&dc->disk.cl)
+	 */
+	ddip = kzalloc(sizeof(struct detached_dev_io_private), GFP_NOIO);
+	ddip->d = d;
+	ddip->start_time = jiffies;
+	ddip->bi_end_io = bio->bi_end_io;
+	ddip->bi_private = bio->bi_private;
+	bio->bi_end_io = detatched_dev_end_io;
+	bio->bi_private = ddip;
+
+	if ((bio_op(bio) == REQ_OP_DISCARD) &&
+	    !blk_queue_discard(bdev_get_queue(dc->bdev)))
+		bio->bi_end_io(bio);
+	else
+		generic_make_request(bio);
+}
+
 /* Cached devices - read & write stuff */
 
 static blk_qc_t cached_dev_make_request(struct request_queue *q,
@@ -1028,13 +1077,8 @@ static blk_qc_t cached_dev_make_request(struct request_queue *q,
 			else
 				cached_dev_read(dc, s);
 		}
-	} else {
-		if ((bio_op(bio) == REQ_OP_DISCARD) &&
-		    !blk_queue_discard(bdev_get_queue(dc->bdev)))
-			bio_endio(bio);
-		else
-			generic_make_request(bio);
-	}
+	} else
+		detached_dev_do_request(d, bio);
 
 	return BLK_QC_T_NONE;
 }
-- 
2.15.1
[PATCH v2 05/12] bcache: stop dc->writeback_rate_update properly
struct delayed_work writeback_rate_update in struct cached_dev is delayed work which calls update_writeback_rate() periodically (the interval is defined by dc->writeback_rate_update_seconds). When a metadata I/O error happens on the cache device, the bcache error handling routine bch_cache_set_error() calls bch_cache_set_unregister() to retire the whole cache set. On the unregister code path, this delayed work is stopped by calling cancel_delayed_work_sync(&dc->writeback_rate_update).

dc->writeback_rate_update is a delayed work different from others in bcache: in its routine update_writeback_rate(), the delayed work re-arms itself. That means even after cancel_delayed_work_sync() returns, this delayed work can still be executed after the several seconds defined by dc->writeback_rate_update_seconds.

The problem is, after cancel_delayed_work_sync() returns, the cache set unregister code path continues and releases the memory of struct cache_set. When the delayed work is then scheduled to run, __update_writeback_rate() references the already released cache_set memory and triggers a NULL pointer dereference fault.

This patch introduces two more bcache device flags,
- BCACHE_DEV_WB_RUNNING
  bit set:   bcache device is in writeback mode and running, it is OK for
             dc->writeback_rate_update to re-arm itself.
  bit clear: bcache device is trying to stop dc->writeback_rate_update,
             this delayed work should not re-arm itself and should quit.
- BCACHE_DEV_RATE_DW_RUNNING
  bit set:   routine update_writeback_rate() is executing.
  bit clear: routine update_writeback_rate() has quit.

This patch also adds a function cancel_writeback_rate_update_dwork() to wait for dc->writeback_rate_update to quit before cancelling it by calling cancel_delayed_work_sync(). In order to avoid a deadlock when dc->writeback_rate_update quits unexpectedly, after time_out seconds this function gives up and continues to call cancel_delayed_work_sync().

Here is how this patch stops the self re-arming delayed work properly with the above pieces.

update_writeback_rate() sets BCACHE_DEV_RATE_DW_RUNNING at its beginning and clears BCACHE_DEV_RATE_DW_RUNNING at its end. Before calling cancel_writeback_rate_update_dwork(), clear flag BCACHE_DEV_WB_RUNNING. Before calling cancel_delayed_work_sync(), wait until flag BCACHE_DEV_RATE_DW_RUNNING is clear. So when cancel_delayed_work_sync() is called, dc->writeback_rate_update must either already be re-armed, or have quit after seeing BCACHE_DEV_WB_RUNNING cleared. In both cases the delayed work routine update_writeback_rate() won't be executed after cancel_delayed_work_sync() returns.

Inside update_writeback_rate(), flag BCACHE_DEV_WB_RUNNING is checked before calling schedule_delayed_work(). If this flag is cleared, it means someone is about to stop the delayed work. Because flag BCACHE_DEV_RATE_DW_RUNNING is already set and cancel_delayed_work_sync() has to wait for this flag to be cleared, we don't need to worry about a race condition here.

If update_writeback_rate() is scheduled to run after BCACHE_DEV_RATE_DW_RUNNING is checked and before cancel_delayed_work_sync() is called in cancel_writeback_rate_update_dwork(), it is also safe, because at this moment BCACHE_DEV_WB_RUNNING is cleared with a memory barrier. As mentioned above, update_writeback_rate() will see that BCACHE_DEV_WB_RUNNING is clear and quit immediately.

Because there are more dependencies inside update_writeback_rate() on struct cache_set memory, dc->writeback_rate_update is not a simple self re-arming delayed work. After trying many different methods (e.g. holding dc->count, or using locks), this is the only way I can find which works to properly stop the dc->writeback_rate_update delayed work.

Changelog:
v2: Try to fix the race issue which is pointed out by Junhui.
v1: The initial version for review.

Signed-off-by: Coly Li
Cc: Michael Lyle
Cc: Hannes Reinecke
Cc: Junhui Tang
---
 drivers/md/bcache/bcache.h    |  9 +++--
 drivers/md/bcache/super.c     | 39 +++++++++++++++++++
 drivers/md/bcache/sysfs.c     |  3 ++-
 drivers/md/bcache/writeback.c | 29 +++++++------
 4 files changed, 70 insertions(+), 10 deletions(-)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 5e2d4e80198e..88d938c8d027 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -258,10 +258,11 @@ struct bcache_device {
 	struct gendisk		*disk;
 
 	unsigned long		flags;
-#define BCACHE_DEV_CLOSING	0
-#define BCACHE_DEV_DETACHING	1
-#define BCACHE_DEV_UNLINK_DONE	2
-
+#define BCACHE_DEV_CLOSING		0
+#define BCACHE_DEV_DETACHING		1
+#define BCACHE_DEV_UNLINK_DONE		2
+#define BCACHE_DEV_WB_RUNNING		4
+#define BCACHE_DEV_RATE_DW_RUNNING	8
 	unsigned		nr_stripes;
 	unsigned		stripe_size;
 	atomic_t
[PATCH v2 04/12] bcache: fix cached_dev->count usage for bch_cache_set_error()
When bcache metadata I/O fails, bcache will call bch_cache_set_error() to retire the whole cache set. The expected behavior when retiring a cache set is to unregister the cache set, unregister all backing devices attached to this cache set, then remove the sysfs entries of the cache set and all attached backing devices, and finally release the memory of structs cache_set, cache, cached_dev and bcache_device.

In my testing, when a journal I/O failure is triggered by a disconnected cache device, sometimes the cache set cannot be retired: its sysfs entry /sys/fs/bcache/ still exists and the backing device also references it. This is not expected behavior.

When metadata I/O fails, the call sequence to retire the whole cache set is,

	bch_cache_set_error()
	  bch_cache_set_unregister()
	    bch_cache_set_stop()
	      __cache_set_unregister()      <- called as a callback by calling
	                                       closure_queue(&c->caching)
	        cache_set_flush()           <- called as a callback when refcount
	                                       of cache_set->caching is 0
	          cache_set_free()          <- called as a callback when refcount
	                                       of cache_set->cl is 0
	            bch_cache_set_release() <- called as a callback when refcount
	                                       of cache_set->kobj is 0

I find that if kernel thread bch_writeback_thread() quits its while-loop when kthread_should_stop() is true and searched_full_index is false, the closure callback cache_set_flush() set by continue_at() will never be called. The result is that bcache fails to retire the whole cache set.

cache_set_flush() will be called when the refcount of closure c->caching is 0, and in function bcache_device_detach() the refcount of closure c->caching is released to 0 by closure_put(). In the metadata error code path, function bcache_device_detach() is called by cached_dev_detach_finish(). This is a callback routine called when cached_dev->count is 0. This refcount is decreased by cached_dev_put(). The above dependency indicates that cache_set_flush() will be called when the refcount of cache_set->cl is 0, and the refcount of cache_set->cl reaches 0 when the refcount of cached_dev->count is 0.

The reason why sometimes cached_dev->count is not 0 (when metadata I/O fails and bch_cache_set_error() is called) is that in bch_writeback_thread(), cached_dev_put() is called only when searched_full_index is true and cached_dev->writeback_keys is empty, a.k.a there is no dirty data on cache. In most run time this is correct, but when bch_writeback_thread() quits the while-loop while the cache is still dirty, the current code forgets to call cached_dev_put() before this kernel thread exits. This is why sometimes cache_set_flush() is not executed and the cache set fails to be retired.

The reason to call cached_dev_put() in bch_writeback_rate() is that when the cache device changes from clean to dirty, cached_dev_get() is called, to make sure that during writeback operations both backing and cache devices won't be released.

Adding the following code in bch_writeback_thread() does not work,

	static int bch_writeback_thread(void *arg)
		}

+		if (atomic_read(&dc->has_dirty))
+			cached_dev_put(dc);
+
		return 0;
	}

because the writeback kernel thread can be woken up and started via the sysfs entry:
	echo 1 > /sys/block/bcache/bcache/writeback_running
It is difficult to check whether the backing device is dirty without races and extra locks, so the above modification would introduce a potential refcount underflow in some conditions.

The correct fix is to take the cached dev refcount when creating the kernel thread, and put it before the kernel thread exits. Then bcache does not need to take a cached dev refcount when the cache turns from clean to dirty, or to put a cached dev refcount when the cache turns from dirty to clean. The writeback kernel thread is always safe to reference data structures from the cache set, cache and cached device (because a refcount of the cached device is already taken for it), and no matter whether the kernel thread is stopped by I/O errors or system reboot, cached_dev->count can always be used correctly.

The patch is simple, but understanding how it works is quite complicated.

Changelog:
v2: set dc->writeback_thread to NULL in this patch, as suggested by Hannes.
v1: initial version for review.

Signed-off-by: Coly Li
Reviewed-by: Hannes Reinecke
Cc: Michael Lyle
Cc: Junhui Tang
---
 drivers/md/bcache/super.c     |  1 -
 drivers/md/bcache/writeback.c | 11 ++++++---
 drivers/md/bcache/writeback.h |  2 --
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 133b81225ea9..d14e09cce2f6 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1052,7 +1052,6 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct
[PATCH v2 03/12] bcache: set task properly in allocator_wait()
Kernel thread routine bch_allocator_thread() uses macro allocator_wait() to wait for a condition, or to quit via do_exit() when kthread_should_stop() is true. Here is the code block,

284		while (1) {					\
285			set_current_state(TASK_INTERRUPTIBLE);	\
286			if (cond)				\
287				break;				\
288								\
289			mutex_unlock(&(ca)->set->bucket_lock);	\
290			if (kthread_should_stop())		\
291				return 0;			\
292								\
293			schedule();				\
294			mutex_lock(&(ca)->set->bucket_lock);	\
295		}						\
296		__set_current_state(TASK_RUNNING);		\

At line 285, the task state is set to TASK_INTERRUPTIBLE. If at line 290 kthread_should_stop() is true, the kernel thread terminates and returns to kernel/kthread.c:kthread(), which then calls do_exit() with the TASK_INTERRUPTIBLE state. This is not suggested behavior and a warning message will be reported by might_sleep() in the do_exit() code path: "WARNING: do not call blocking ops when !TASK_RUNNING; state=1 set at []".

This patch fixes the problem by setting the task state to TASK_RUNNING if kthread_should_stop() is true, before the kernel thread returns back to kernel/kthread.c:kthread().

Changelog:
v2: fix the race issue in v1 patch.
v1: initial buggy fix.

Signed-off-by: Coly Li
Cc: Michael Lyle
Cc: Hannes Reinecke
Cc: Junhui Tang
---
 drivers/md/bcache/alloc.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
index 6cc6c0f9c3a9..458e1d38577d 100644
--- a/drivers/md/bcache/alloc.c
+++ b/drivers/md/bcache/alloc.c
@@ -287,8 +287,10 @@ do {									\
 			break;						\
 									\
 		mutex_unlock(&(ca)->set->bucket_lock);			\
-		if (kthread_should_stop())				\
+		if (kthread_should_stop()) {				\
+			set_current_state(TASK_RUNNING);		\
 			return 0;					\
+		}							\
 									\
 		schedule();						\
 		mutex_lock(&(ca)->set->bucket_lock);			\
-- 
2.15.1
[PATCH v2 02/12] bcache: properly set task state in bch_writeback_thread()
Kernel thread routine bch_writeback_thread() has the following code block,

447		down_write(&dc->writeback_lock);
448~450		if (check conditions) {
451			up_write(&dc->writeback_lock);
452			set_current_state(TASK_INTERRUPTIBLE);
453
454			if (kthread_should_stop())
455				return 0;
456
457			schedule();
458			continue;
459		}

If the condition check is true, the task state is set to TASK_INTERRUPTIBLE and schedule() is called to wait for others to wake it up. There are 2 issues in the current code,
1, The task state is set to TASK_INTERRUPTIBLE after the condition checks. If another process changes the condition and calls wake_up_process(dc->writeback_thread) first, then at line 452 the task state is set back to TASK_INTERRUPTIBLE and the writeback kernel thread loses a chance to be woken up.
2, At line 454, if kthread_should_stop() is true, the writeback kernel thread returns to kernel/kthread.c:kthread() with TASK_INTERRUPTIBLE and calls do_exit(). It is not good to enter do_exit() with task state TASK_INTERRUPTIBLE: in the following code path might_sleep() is called and a warning message is reported by __might_sleep(): "WARNING: do not call blocking ops when !TASK_RUNNING; state=1 set at []".

For the first issue, the task state should be set before the condition checks. Indeed, because dc->writeback_lock is required when modifying all the conditions, calling set_current_state() inside the code block where dc->writeback_lock is held is safe. But this is quite implicit, so I still move set_current_state() before all the condition checks.

For the second issue, frankly speaking it does not hurt when the kernel thread exits with TASK_INTERRUPTIBLE state, but this warning message scares users, making them feel there might be something risky with bcache that could hurt their data. Setting the task state to TASK_RUNNING before returning fixes this problem.

Changelog:
v2: fix the race issue in v1 patch.
v1: initial buggy fix.

Signed-off-by: Coly Li
Cc: Michael Lyle
Cc: Hannes Reinecke
Cc: Junhui Tang
---
 drivers/md/bcache/writeback.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index 0ade883b6316..f1d2fc15abcc 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -564,18 +564,21 @@ static int bch_writeback_thread(void *arg)
 
 	while (!kthread_should_stop()) {
 		down_write(&dc->writeback_lock);
+		set_current_state(TASK_INTERRUPTIBLE);
 		if (!atomic_read(&dc->has_dirty) ||
 		    (!test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags) &&
 		     !dc->writeback_running)) {
 			up_write(&dc->writeback_lock);
-			set_current_state(TASK_INTERRUPTIBLE);
 
-			if (kthread_should_stop())
+			if (kthread_should_stop()) {
+				set_current_state(TASK_RUNNING);
 				return 0;
+			}
 
 			schedule();
 			continue;
 		}
+		set_current_state(TASK_RUNNING);
 
 		searched_full_index = refill_dirty(dc);
-- 
2.15.1
[PATCH v2 00/12] bcache: device failure handling improvement
Hi maintainers and folks,

This patch set tries to improve bcache device failure handling, including cache device and backing device failures.

The basic idea to handle a failed cache device is,
- Unregister the cache set
- Detach all backing devices attached to this cache set
- Stop all bcache devices linked to this cache set
The above process is named 'cache set retire' by me. The result of a cache set retire is that the cache set and bcache devices are all removed, and following I/O requests fail immediately to notify the upper layer or user space code that the cache device is failed or disconnected.

For a failed backing device, there are two ways to handle it,
- If the device is disconnected, when kernel thread dc->status_update_thread finds it has been offline for BACKING_DEV_OFFLINE_TIMEOUT (5) seconds, the kernel thread sets dc->io_disable and calls bcache_device_stop() to stop and remove the bcache device from the system.
- If the device is connected but too many I/O errors happen, after the number of errors exceeds dc->error_limit, bch_cached_dev_error() is called to set dc->io_disable and stop the bcache device. Then the broken backing device and its bcache device will be removed from the system.

The v2 patch set fixes the problems addressed in the v1 patch reviews and adds failure handling for the backing device. This patch set also includes a patch from Junhui Tang. The v2 patch set does not include 2 patches which are in bcache-for-next already.

Basic testing covered writethrough, writeback and writearound modes with read/write/readwrite workloads; the cache set or bcache device can be removed by too many I/O errors or by deleting the device. When physically unplugging disks, a kernel bug triggers an rcu oops in __do_softirq() and locks up all following accesses to the disconnected disk; this blocks my testing. While posting the v2 patch set, I also continue to test the code from my side.

Any comment, question and review are warmly welcome.

Open issues:
1, Detaching a backing device by writing the sysfs detach file does not work. This is because the writeback thread does not drop the dc->count refcount when the cache device turns from dirty into clean. This issue will be fixed in the v3 patch set.
2, A kernel bug in __do_softirq() when unplugging a hard disk under heavy I/O blocks my physical disk disconnection test. If anyone knows this bug, please give me a hint.

Changelog:
v2: fixes all problems found in v1 review.
    add patches to handle backing device failure.
    add one more patch to set writeback_rate_update_seconds range.
    include a patch from Junhui Tang.
v1: the initial version, only handles cache device failure.

Coly Li (11):
  bcache: set writeback_rate_update_seconds in range [1, 60] seconds
  bcache: properly set task state in bch_writeback_thread()
  bcache: set task properly in allocator_wait()
  bcache: fix cached_dev->count usage for bch_cache_set_error()
  bcache: stop dc->writeback_rate_update properly
  bcache: set error_limit correctly
  bcache: add CACHE_SET_IO_DISABLE to struct cache_set flags
  bcache: stop all attached bcache devices for a retired cache set
  bcache: add backing_request_endio() for bi_end_io of attached backing
    device I/O
  bcache: add io_disable to struct cached_dev
  bcache: stop bcache device when backing device is offline

Tang Junhui (1):
  bcache: fix inaccurate io state for detached bcache devices

 drivers/md/bcache/alloc.c     |   5 +-
 drivers/md/bcache/bcache.h    |  37 ++++-
 drivers/md/bcache/btree.c     |  10 ++-
 drivers/md/bcache/io.c        |  16 +++-
 drivers/md/bcache/journal.c   |   4 +-
 drivers/md/bcache/request.c   | 188 +++++++++++++++++------
 drivers/md/bcache/super.c     | 134 ++++++++++++----
 drivers/md/bcache/sysfs.c     |  45 +++++-
 drivers/md/bcache/util.h      |   6 --
 drivers/md/bcache/writeback.c |  79 +++++++---
 drivers/md/bcache/writeback.h |   5 +-
 11 files changed, 458 insertions(+), 71 deletions(-)

-- 
2.15.1
[PATCH v2 01/12] bcache: set writeback_rate_update_seconds in range [1, 60] seconds
dc->writeback_rate_update_seconds can be set via sysfs and its value can be set to [1, ULONG_MAX]. It does not make sense to set such a large value; 60 seconds is a long enough value, considering that the default 5 seconds has worked well for a long time.

Because dc->writeback_rate_update is a special delayed work which re-arms itself inside the delayed work routine update_writeback_rate(), when stopping it by cancel_delayed_work_sync() there should be a timeout to wait and make sure the re-armed delayed work is stopped too. A small max value of dc->writeback_rate_update_seconds is also helpful to decide a reasonably small timeout.

This patch limits the sysfs interface to set dc->writeback_rate_update_seconds within the range [1, 60] seconds, and replaces the hand-coded numbers with macros.

Signed-off-by: Coly Li
---
 drivers/md/bcache/sysfs.c     | 3 +++
 drivers/md/bcache/writeback.c | 2 +-
 drivers/md/bcache/writeback.h | 3 +++
 3 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index b4184092c727..a74a752c9e0f 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -215,6 +215,9 @@ STORE(__cached_dev)
 	sysfs_strtoul_clamp(writeback_rate,
 			    dc->writeback_rate.rate, 1, INT_MAX);
 
+	sysfs_strtoul_clamp(writeback_rate_update_seconds,
+			    dc->writeback_rate_update_seconds,
+			    1, WRITEBACK_RATE_UPDATE_SECS_MAX);
 	d_strtoul_nonzero(writeback_rate_update_seconds);
 	d_strtoul(writeback_rate_i_term_inverse);
 	d_strtoul_nonzero(writeback_rate_p_term_inverse);
diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index 51306a19ab03..0ade883b6316 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -652,7 +652,7 @@ void bch_cached_dev_writeback_init(struct cached_dev *dc)
 	dc->writeback_rate.rate		= 1024;
 	dc->writeback_rate_minimum	= 8;
 
-	dc->writeback_rate_update_seconds = 5;
+	dc->writeback_rate_update_seconds = WRITEBACK_RATE_UPDATE_SECS_DEFAULT;
 	dc->writeback_rate_p_term_inverse = 40;
 	dc->writeback_rate_i_term_inverse = 1;
diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h
index 66f1c527fa24..587b25599856 100644
--- a/drivers/md/bcache/writeback.h
+++ b/drivers/md/bcache/writeback.h
@@ -8,6 +8,9 @@
 #define MAX_WRITEBACKS_IN_PASS  5
 #define MAX_WRITESIZE_IN_PASS   5000	/* *512b */
 
+#define WRITEBACK_RATE_UPDATE_SECS_MAX		60
+#define WRITEBACK_RATE_UPDATE_SECS_DEFAULT	5
+
 /*
  * 14 (16384ths) is chosen here as something that each backing device
 * should be a reasonable fraction of the share, and not to blow up
-- 
2.15.1
[PATCH v2 06/12] bcache: set error_limit correctly
Struct cache uses io_errors for two purposes,
- Error decay: when the cache set error_decay is set, io_errors is used to generate a small piece of delay when an I/O error happens.
- I/O errors counter: in order to generate a big enough value for error decay, the I/O errors counter value is stored left-shifted by 20 bits (a.k.a IO_ERROR_SHIFT).

In function bch_count_io_errors(), if the I/O errors counter reaches the cache set error limit, bch_cache_set_error() is called to retire the whole cache set. But the current code is problematic when checking the error limit; see the following code piece from bch_count_io_errors(),

 90	if (error) {
 91		char buf[BDEVNAME_SIZE];
 92		unsigned errors = atomic_add_return(1 << IO_ERROR_SHIFT,
 93						    &ca->io_errors);
 94		errors >>= IO_ERROR_SHIFT;
 95
 96		if (errors < ca->set->error_limit)
 97			pr_err("%s: IO error on %s, recovering",
 98			       bdevname(ca->bdev, buf), m);
 99		else
100			bch_cache_set_error(ca->set,
101					    "%s: too many IO errors %s",
102					    bdevname(ca->bdev, buf), m);
103	}

At line 94, errors is right-shifted by IO_ERROR_SHIFT bits, so it is the real errors counter to compare at line 96. But ca->set->error_limit is initialized with an amplified value in bch_cache_set_alloc(),

1545	c->error_limit	= 8 << IO_ERROR_SHIFT;

It means that by default, in bch_count_io_errors(), bch_cache_set_error() won't be called to retire the problematic cache device until 8<<20 errors have happened. If the average request size is 64KB, it means bcache won't handle the failed device until 512GB of data has been requested. This is too large to be an I/O threshold, so I believe the correct error limit should be much smaller.

This patch sets the default cache set error limit to 8; then in bch_count_io_errors(), when the errors counter reaches 8 (if it is the default value), bch_cache_set_error() is called to retire the whole cache set. This patch also removes the bit shifting when storing or showing the io_error_limit value via the sysfs interface.

Nowadays most SSDs handle internal flash failures automatically by LBA address re-indirect mapping. If an I/O error can be observed by upper layer code, it is a notable error, because the SSD can not re-indirect map the problematic LBA address to an available flash block. This situation indicates that the whole SSD will fail very soon. Therefore setting 8 as the default io error limit value makes sense; it is enough for most cache devices.

Changelog:
v2: add reviewed-by from Hannes.
v1: initial version for review.

Signed-off-by: Coly Li
Reviewed-by: Hannes Reinecke
Cc: Michael Lyle
Cc: Junhui Tang
---
 drivers/md/bcache/bcache.h | 1 +
 drivers/md/bcache/super.c  | 2 +-
 drivers/md/bcache/sysfs.c  | 4 ++--
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 88d938c8d027..7d7512fa4f09 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -663,6 +663,7 @@ struct cache_set {
 		ON_ERROR_UNREGISTER,
 		ON_ERROR_PANIC,
 	}			on_error;
+#define DEFAULT_IO_ERROR_LIMIT 8
 	unsigned		error_limit;
 	unsigned		error_decay;
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 6d888e8fea8c..a373648b5d4b 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1583,7 +1583,7 @@ struct cache_set *bch_cache_set_alloc(struct cache_sb *sb)
 	c->congested_read_threshold_us	= 2000;
 	c->congested_write_threshold_us	= 2;
 
-	c->error_limit	= 8 << IO_ERROR_SHIFT;
+	c->error_limit	= DEFAULT_IO_ERROR_LIMIT;
 
 	return c;
 err:
diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index b7166c504cdb..ba62e987b503 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -560,7 +560,7 @@ SHOW(__bch_cache_set)
 	/* See count_io_errors for why 88 */
 	sysfs_print(io_error_halflife,	c->error_decay * 88);
-	sysfs_print(io_error_limit,	c->error_limit >> IO_ERROR_SHIFT);
+	sysfs_print(io_error_limit,	c->error_limit);
 	sysfs_hprint(congested,
 		     ((uint64_t) bch_get_congested(c)) << 9);
@@ -660,7 +660,7 @@ STORE(__bch_cache_set)
 	}
 
 	if (attr == &sysfs_io_error_limit)
-		c->error_limit = strtoul_or_return(buf) << IO_ERROR_SHIFT;
+		c->error_limit = strtoul_or_return(buf);
 
 	/* See count_io_errors() for why 88 */
 	if (attr == &sysfs_io_error_halflife)
-- 
2.15.1
[PATCH v2 00/12] bcache: device failure handling improvement
Hi maintainers and folks, This patch set tries to improve bcache device failure handling, including cache device and backing device failures. The basic idea to handle failed cache device is, - Unregister cache set - Detach all backing devices attached to this cache set - Stop all bcache devices linked to this cache set The above process is named 'cache set retire' by me. The result of cache set retire is, cache set and bcache devices are all removed, following I/O requests will get failed immediately to notift upper layer or user space coce that the cache device is failed or disconnected. For failed backing device, there are two ways to handle them, - If device is disconnected, when kernel thread dc->status_update_thread finds it is offline for BACKING_DEV_OFFLINE_TIMEOUT (5) seconds, the kernel thread will set dc->io_disable and call bcache_device_stop() to stop and remove the bcache device from system. - If device is connected but too many I/O errors happen, after errors number exceeds dc->error_limit, call bch_cached_dev_error() to set dc->io_disable and stop bcache device. Then the broken backing device and its bcache device will be removed from system. The v2 patch set fixes the problems addressed in v1 patch reviews, adds failure handling for backing device. This patch set also includes a patch from Junhui Tang. And the v2 patch set does not include 2 patches which are in bcache-for-next already. A basic testing covered with writethrough, writeback, writearound mode, and read/write/readwrite workloads, cache set or bcache device can be removed by too many I/O errors or delete the device. For plugging out physical disks, a kernel bug triggers rcu oops in __do_softirq() and locks up all following accesses to the disconnected disk, this blocks my testing. While posting v2 patch set, I also continue to test the code from my side. Any comment, question and review are warmly welcome. 
Open issues:
1, Detaching a backing device by writing the sysfs detach file does not work, because the writeback thread does not drop the dc->count refcount when the cache device turns from dirty into clean. This issue will be fixed in the v3 patch set.
2, A kernel bug in __do_softirq() when plugging out a hard disk under heavy I/O blocks my physical disk disconnection test. If anyone knows about this bug, please give me a hint.

Changelog:
v2: fixes all problems found in v1 review.
    add patches to handle backing device failure.
    add one more patch to set writeback_rate_update_seconds range.
    include a patch from Junhui Tang.
v1: the initial version, only handles cache device failure.

Coly Li (11):
  bcache: set writeback_rate_update_seconds in range [1, 60] seconds
  bcache: properly set task state in bch_writeback_thread()
  bcache: set task properly in allocator_wait()
  bcache: fix cached_dev->count usage for bch_cache_set_error()
  bcache: stop dc->writeback_rate_update properly
  bcache: set error_limit correctly
  bcache: add CACHE_SET_IO_DISABLE to struct cache_set flags
  bcache: stop all attached bcache devices for a retired cache set
  bcache: add backing_request_endio() for bi_end_io of attached backing device I/O
  bcache: add io_disable to struct cached_dev
  bcache: stop bcache device when backing device is offline

Tang Junhui (1):
  bcache: fix inaccurate io state for detached bcache devices

 drivers/md/bcache/alloc.c     |   5 +-
 drivers/md/bcache/bcache.h    |  37 -
 drivers/md/bcache/btree.c     |  10 ++-
 drivers/md/bcache/io.c        |  16 +++-
 drivers/md/bcache/journal.c   |   4 +-
 drivers/md/bcache/request.c   | 188 +++---
 drivers/md/bcache/super.c     | 134 --
 drivers/md/bcache/sysfs.c     |  45 +-
 drivers/md/bcache/util.h      |   6 --
 drivers/md/bcache/writeback.c |  79 +++---
 drivers/md/bcache/writeback.h |   5 +-
 11 files changed, 458 insertions(+), 71 deletions(-)

-- 
2.15.1
[PATCH v2 01/12] bcache: set writeback_rate_update_seconds in range [1, 60] seconds
dc->writeback_rate_update_seconds can be set via sysfs, and its value can be set to [1, ULONG_MAX]. It does not make sense to allow such a large value; 60 seconds is long enough, considering that the default 5 seconds has worked well for a long time.

Because dc->writeback_rate_update is a special delayed work, it re-arms itself inside the delayed work routine update_writeback_rate(). When stopping it by cancel_delayed_work_sync(), there should be a timeout to wait and make sure the re-armed delayed work is stopped too. A small maximum value of dc->writeback_rate_update_seconds is also helpful for deciding a reasonably small timeout.

This patch limits the sysfs interface to set dc->writeback_rate_update_seconds in the range of [1, 60] seconds, and replaces the hand-coded numbers with macros.

Signed-off-by: Coly Li
---
 drivers/md/bcache/sysfs.c     | 3 +++
 drivers/md/bcache/writeback.c | 2 +-
 drivers/md/bcache/writeback.h | 3 +++
 3 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index b4184092c727..a74a752c9e0f 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -215,6 +215,9 @@ STORE(__cached_dev)
 	sysfs_strtoul_clamp(writeback_rate, dc->writeback_rate.rate, 1, INT_MAX);
+	sysfs_strtoul_clamp(writeback_rate_update_seconds,
+			    dc->writeback_rate_update_seconds,
+			    1, WRITEBACK_RATE_UPDATE_SECS_MAX);
 	d_strtoul_nonzero(writeback_rate_update_seconds);
 	d_strtoul(writeback_rate_i_term_inverse);
 	d_strtoul_nonzero(writeback_rate_p_term_inverse);
diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index 51306a19ab03..0ade883b6316 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -652,7 +652,7 @@ void bch_cached_dev_writeback_init(struct cached_dev *dc)
 	dc->writeback_rate.rate		= 1024;
 	dc->writeback_rate_minimum	= 8;

-	dc->writeback_rate_update_seconds = 5;
+	dc->writeback_rate_update_seconds = WRITEBACK_RATE_UPDATE_SECS_DEFAULT;
 	dc->writeback_rate_p_term_inverse = 40;
 	dc->writeback_rate_i_term_inverse = 1;
diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h
index 66f1c527fa24..587b25599856 100644
--- a/drivers/md/bcache/writeback.h
+++ b/drivers/md/bcache/writeback.h
@@ -8,6 +8,9 @@
 #define MAX_WRITEBACKS_IN_PASS	5
 #define MAX_WRITESIZE_IN_PASS	5000	/* *512b */

+#define WRITEBACK_RATE_UPDATE_SECS_MAX		60
+#define WRITEBACK_RATE_UPDATE_SECS_DEFAULT	5
+
 /*
  * 14 (16384ths) is chosen here as something that each backing device
  * should be a reasonable fraction of the share, and not to blow up
-- 
2.15.1
[PATCH v2 02/12] bcache: properly set task state in bch_writeback_thread()
Kernel thread routine bch_writeback_thread() has the following code block:

447		down_write(&dc->writeback_lock);
448~450		if (check conditions) {
451			up_write(&dc->writeback_lock);
452			set_current_state(TASK_INTERRUPTIBLE);
453
454			if (kthread_should_stop())
455				return 0;
456
457			schedule();
458			continue;
459		}

If the condition check is true, the task state is set to TASK_INTERRUPTIBLE and schedule() is called to wait for someone else to wake it up.

There are 2 issues in the current code:

1, The task state is set to TASK_INTERRUPTIBLE after the condition checks. If another process changes the condition and calls wake_up_process(dc->writeback_thread) in between, then at line 452 the task state is set back to TASK_INTERRUPTIBLE, and the writeback kernel thread loses a chance to be woken up.

2, At line 454, if kthread_should_stop() is true, the writeback kernel thread returns to kernel/kthread.c:kthread() with state TASK_INTERRUPTIBLE and calls do_exit(). It is not good to enter do_exit() with task state TASK_INTERRUPTIBLE: in the following code path might_sleep() is called and a warning message is reported by __might_sleep(): "WARNING: do not call blocking ops when !TASK_RUNNING; state=1 set at []".

For the first issue, the task state should be set before the condition checks. Indeed, because dc->writeback_lock is required when modifying all the conditions, calling set_current_state() inside the code block where dc->writeback_lock is held is safe. But this is quite implicit, so I still move set_current_state() before all the condition checks.

For the second issue, frankly speaking it does not hurt when a kernel thread exits in TASK_INTERRUPTIBLE state, but the warning message scares users, making them feel there might be something risky with bcache that could hurt their data. Setting the task state to TASK_RUNNING before returning fixes this problem.

Changelog:
v2: fix the race issue in v1 patch.
v1: initial buggy fix.
Signed-off-by: Coly Li
Cc: Michael Lyle
Cc: Hannes Reinecke
Cc: Junhui Tang
---
 drivers/md/bcache/writeback.c | 7 +++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index 0ade883b6316..f1d2fc15abcc 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -564,18 +564,21 @@ static int bch_writeback_thread(void *arg)
 	while (!kthread_should_stop()) {
 		down_write(&dc->writeback_lock);
+		set_current_state(TASK_INTERRUPTIBLE);
 		if (!atomic_read(&dc->has_dirty) ||
 		    (!test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags) &&
 		     !dc->writeback_running)) {
 			up_write(&dc->writeback_lock);
-			set_current_state(TASK_INTERRUPTIBLE);

-			if (kthread_should_stop())
+			if (kthread_should_stop()) {
+				set_current_state(TASK_RUNNING);
 				return 0;
+			}

 			schedule();
 			continue;
 		}
+		set_current_state(TASK_RUNNING);

 		searched_full_index = refill_dirty(dc);
-- 
2.15.1
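The patch follows the canonical kernel wait-loop pattern: set the task state before testing the wait condition, so a wake_up_process() arriving between the test and schedule() cannot be lost. A minimal, non-runnable sketch of the general pattern (not bcache code; `should_sleep_condition()` and `do_work()` are invented placeholders):

```c
/*
 * Canonical sleep/wakeup pattern for a kernel thread (sketch only).
 * Setting TASK_INTERRUPTIBLE *before* checking the condition means a
 * concurrent wake_up_process() after the check simply resets the state
 * to TASK_RUNNING, so the subsequent schedule() returns immediately
 * instead of sleeping and missing the wakeup.
 */
while (!kthread_should_stop()) {
	set_current_state(TASK_INTERRUPTIBLE);

	if (!should_sleep_condition()) {	/* hypothetical helper */
		__set_current_state(TASK_RUNNING);
		do_work();			/* hypothetical helper */
		continue;
	}

	if (kthread_should_stop()) {
		/* never enter do_exit() with a non-running task state */
		__set_current_state(TASK_RUNNING);
		return 0;
	}

	schedule();
}
```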
[PATCH v2 04/12] bcache: fix cached_dev->count usage for bch_cache_set_error()
When bcache metadata I/O fails, bcache calls bch_cache_set_error() to retire the whole cache set. The expected behavior when retiring a cache set is to unregister the cache set, unregister all backing devices attached to this cache set, then remove the sysfs entries of the cache set and all attached backing devices, and finally release the memory of structs cache_set, cache, cached_dev and bcache_device.

In my testing, when a journal I/O failure is triggered by a disconnected cache device, sometimes the cache set cannot be retired: its sysfs entry /sys/fs/bcache/ still exists and the backing device still references it. This is not the expected behavior.

When metadata I/O fails, the call sequence to retire the whole cache set is:

	bch_cache_set_error()
	  bch_cache_set_unregister()
	    bch_cache_set_stop()
	      __cache_set_unregister()	<- called as callback by calling closure_queue(&c->caching)
	cache_set_flush()		<- called as a callback when refcount of cache_set->caching is 0
	cache_set_free()		<- called as a callback when refcount of cache_set->cl is 0
	bch_cache_set_release()		<- called as a callback when refcount of cache_set->kobj is 0

I find that if kernel thread bch_writeback_thread() quits its while-loop when kthread_should_stop() is true and searched_full_index is false, the closure callback cache_set_flush() set by continue_at() will never be called. The result is that bcache fails to retire the whole cache set.

cache_set_flush() is called when the refcount of closure c->caching is 0, and in function bcache_device_detach() the refcount of closure c->caching is released to 0 by closure_put(). In the metadata-error code path, function bcache_device_detach() is called by cached_dev_detach_finish(). This is a callback routine called when cached_dev->count is 0; this refcount is decreased by cached_dev_put().

The above dependency indicates that cache_set_flush() will be called when the refcount of cache_set->cl is 0, and the refcount of cache_set->cl reaches 0 when the refcount of cached_dev->count is 0.
The reason why sometimes cached_dev->count is not 0 (when metadata I/O fails and bch_cache_set_error() is called) is that in bch_writeback_thread() the refcount of cached_dev is not decreased properly.

In bch_writeback_thread(), cached_dev_put() is called only when searched_full_index is true and cached_dev->writeback_keys is empty, i.e. there is no dirty data on the cache. Most of the run time this is correct, but when bch_writeback_thread() quits the while-loop while the cache is still dirty, the current code forgets to call cached_dev_put() before the kernel thread exits. This is why sometimes cache_set_flush() is not executed and the cache set fails to be retired.

The reason to call cached_dev_put() in bch_writeback_rate() is that when the cache device changes from clean to dirty, cached_dev_get() is called to make sure that during writeback operations neither the backing nor the cache device is released.

Adding the following code in bch_writeback_thread() does not work,

static int bch_writeback_thread(void *arg)
 }
+	if (atomic_read(&dc->has_dirty))
+		cached_dev_put()
+
 	return 0;
 }

because the writeback kernel thread can be woken up and started via the sysfs entry:

	echo 1 > /sys/block/bcache/bcache/writeback_running

It is difficult to check whether the backing device is dirty without races and extra locking, so the above modification would introduce a potential refcount underflow in some conditions.

The correct fix is to take the cached dev refcount when creating the kernel thread, and to put it before the kernel thread exits. Then bcache does not need to take a cached dev refcount when the cache turns from clean to dirty, or to put a cached dev refcount when the cache turns from dirty to clean. The writeback kernel thread is always safe to reference data structures from the cache set, cache and cached device (because a refcount of the cached device is already taken for it), and no matter whether the kernel thread is stopped by I/O errors or by a system reboot, cached_dev->count can always be used correctly.
The patch is simple, but understanding how it works is quite complicated.

Changelog:
v2: set dc->writeback_thread to NULL in this patch, as suggested by Hannes.
v1: initial version for review.

Signed-off-by: Coly Li
Reviewed-by: Hannes Reinecke
Cc: Michael Lyle
Cc: Junhui Tang
---
 drivers/md/bcache/super.c     |  1 -
 drivers/md/bcache/writeback.c | 11 ---
 drivers/md/bcache/writeback.h |  2 --
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 133b81225ea9..d14e09cce2f6 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -1052,7 +1052,6 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct
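The refcount lifecycle that the fix moves to can be sketched as follows. This is illustrative pseudocode, not the actual patch: the helper name `start_writeback_thread()` is invented, and error handling is simplified.

```c
/*
 * Sketch: tie the cached_dev refcount to the writeback thread's
 * lifetime instead of to clean<->dirty transitions (illustrative
 * pseudocode, NOT the real patch).
 */
static int bch_writeback_thread(void *arg)
{
	struct cached_dev *dc = arg;

	while (!kthread_should_stop()) {
		/* ... writeback work ... */
	}

	cached_dev_put(dc);	/* drop the ref taken at thread creation */
	return 0;
}

static int start_writeback_thread(struct cached_dev *dc)	/* invented name */
{
	cached_dev_get(dc);	/* the thread owns one ref for its whole life */

	dc->writeback_thread = kthread_create(bch_writeback_thread, dc,
					      "bcache_writeback");
	if (IS_ERR(dc->writeback_thread)) {
		cached_dev_put(dc);	/* creation failed: give the ref back */
		return PTR_ERR(dc->writeback_thread);
	}
	return 0;
}
```

With this shape, the thread can always safely dereference the cached device, and the put happens exactly once whether the thread is stopped by I/O errors or by a normal shutdown.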
[PATCH v2 03/12] bcache: set task properly in allocator_wait()
Kernel thread routine bch_allocator_thread() uses the macro allocator_wait() to wait for a condition, or to quit to do_exit() when kthread_should_stop() is true. Here is the code block:

284	while (1) {						\
285		set_current_state(TASK_INTERRUPTIBLE);		\
286		if (cond)					\
287			break;					\
288								\
289		mutex_unlock(&(ca)->set->bucket_lock);		\
290		if (kthread_should_stop())			\
291			return 0;				\
292								\
293		schedule();					\
294		mutex_lock(&(ca)->set->bucket_lock);		\
295	}							\
296	__set_current_state(TASK_RUNNING);			\

At line 285, the task state is set to TASK_INTERRUPTIBLE. If at line 290 kthread_should_stop() is true, the kernel thread terminates and returns to kernel/kthread.c:kthread(), which then calls do_exit() with TASK_INTERRUPTIBLE state. This is not a suggested behavior, and a warning message is reported by might_sleep() in the do_exit() code path: "WARNING: do not call blocking ops when !TASK_RUNNING; state=1 set at []".

This patch fixes the problem by setting the task state to TASK_RUNNING if kthread_should_stop() is true, before the kernel thread returns to kernel/kthread.c:kthread().

Changelog:
v2: fix the race issue in v1 patch.
v1: initial buggy fix.

Signed-off-by: Coly Li
Cc: Michael Lyle
Cc: Hannes Reinecke
Cc: Junhui Tang
---
 drivers/md/bcache/alloc.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
index 6cc6c0f9c3a9..458e1d38577d 100644
--- a/drivers/md/bcache/alloc.c
+++ b/drivers/md/bcache/alloc.c
@@ -287,8 +287,10 @@ do {								\
 			break;					\
 								\
 		mutex_unlock(&(ca)->set->bucket_lock);		\
-		if (kthread_should_stop())			\
+		if (kthread_should_stop()) {			\
+			set_current_state(TASK_RUNNING);	\
 			return 0;				\
+		}						\
 								\
 		schedule();					\
 		mutex_lock(&(ca)->set->bucket_lock);		\
-- 
2.15.1
Re: [PATCH v2 3/4] scsi: Avoid that .queuecommand() gets called for a quiesced SCSI device
On Fri, Jan 12, 2018 at 10:45:57PM +, Bart Van Assche wrote:
> On Thu, 2018-01-11 at 10:23 +0800, Ming Lei wrote:
> > > not sufficient to prevent .queuecommand() calls from scsi_send_eh_cmnd().
> > 
> > Given it is error handling, do we need to prevent the .queuecommand() call
> > in scsi_send_eh_cmnd()? Could you share us what the actual issue
> > observed is from user view?
> 
> Please have a look at the kernel bug report in the description of patch 4/4 of
> this series.

Thanks for mentioning it; now I can find the following comment in srp_queuecommand():

	/*
	 * The SCSI EH thread is the only context from which srp_queuecommand()
	 * can get invoked for blocked devices (SDEV_BLOCK /
	 * SDEV_CREATED_BLOCK). Avoid racing with srp_reconnect_rport() by
	 * locking the rport mutex if invoked from inside the SCSI EH.
	 */

That means EH requests are allowed to be sent to blocked devices.

I also replied in patch 4/4: it looks like there is a simple one-line change which should address the 'sleep in atomic context' issue. Please discuss it in the patch 4/4 thread.

-- 
Ming
Re: [PATCH v2 4/4] IB/srp: Fix a sleep-in-invalid-context bug
On Wed, Jan 10, 2018 at 10:18:17AM -0800, Bart Van Assche wrote: > The previous two patches guarantee that srp_queuecommand() does not get > invoked while reconnecting occurs. Hence remove the code from > srp_queuecommand() that prevents command queueing while reconnecting. > This patch avoids that the following can appear in the kernel log: > > BUG: sleeping function called from invalid context at > kernel/locking/mutex.c:747 > in_atomic(): 1, irqs_disabled(): 0, pid: 5600, name: scsi_eh_9 > 1 lock held by scsi_eh_9/5600: > #0: (rcu_read_lock){}, at: [] > __blk_mq_run_hw_queue+0xf1/0x1e0 > Preemption disabled at: > [<139badf2>] __blk_mq_delay_run_hw_queue+0x78/0xf0 > CPU: 9 PID: 5600 Comm: scsi_eh_9 Tainted: GW4.15.0-rc4-dbg+ #1 > Hardware name: Dell Inc. PowerEdge R720/0VWT90, BIOS 2.5.4 01/22/2016 > Call Trace: > dump_stack+0x67/0x99 > ___might_sleep+0x16a/0x250 [ib_srp] > __mutex_lock+0x46/0x9d0 > srp_queuecommand+0x356/0x420 [ib_srp] > scsi_dispatch_cmd+0xf6/0x3f0 > scsi_queue_rq+0x4a8/0x5f0 > blk_mq_dispatch_rq_list+0x73/0x440 > blk_mq_sched_dispatch_requests+0x109/0x1a0 > __blk_mq_run_hw_queue+0x131/0x1e0 > __blk_mq_delay_run_hw_queue+0x9a/0xf0 > blk_mq_run_hw_queue+0xc0/0x1e0 > blk_mq_start_hw_queues+0x2c/0x40 > scsi_run_queue+0x18e/0x2d0 > scsi_run_host_queues+0x22/0x40 > scsi_error_handler+0x18d/0x5f0 > kthread+0x11c/0x140 > ret_from_fork+0x24/0x30 > > Signed-off-by: Bart Van Assche > Reviewed-by: Hannes Reinecke > Cc: Jason Gunthorpe > Cc: Doug Ledford > --- > drivers/infiniband/ulp/srp/ib_srp.c | 21 ++--- > 1 file changed, 2 insertions(+), 19 deletions(-) > > diff --git a/drivers/infiniband/ulp/srp/ib_srp.c > b/drivers/infiniband/ulp/srp/ib_srp.c > index 972d4b3c5223..670f187ecb91 100644 > --- a/drivers/infiniband/ulp/srp/ib_srp.c > +++ b/drivers/infiniband/ulp/srp/ib_srp.c > @@ -2149,7 +2149,6 @@ static void srp_handle_qp_err(struct ib_cq *cq, struct > ib_wc *wc, > static int srp_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmnd) > { > 
> 	struct srp_target_port *target = host_to_target(shost);
> -	struct srp_rport *rport = target->rport;
> 	struct srp_rdma_ch *ch;
> 	struct srp_request *req;
> 	struct srp_iu *iu;
> @@ -2159,16 +2158,6 @@ static int srp_queuecommand(struct Scsi_Host *shost,
> 				struct scsi_cmnd *scmnd)
> 	u32 tag;
> 	u16 idx;
> 	int len, ret;
> -	const bool in_scsi_eh = !in_interrupt() && current == shost->ehandler;
> -
> -	/*
> -	 * The SCSI EH thread is the only context from which srp_queuecommand()
> -	 * can get invoked for blocked devices (SDEV_BLOCK /
> -	 * SDEV_CREATED_BLOCK). Avoid racing with srp_reconnect_rport() by
> -	 * locking the rport mutex if invoked from inside the SCSI EH.
> -	 */
> -	if (in_scsi_eh)
> -		mutex_lock(&rport->mutex);

This issue is triggered because the above EH-handler context detection is wrong: scsi_run_host_queues() called from scsi_error_handler() is for handling normal requests, see the comment below:

	/*
	 * finally we need to re-initiate requests that may be pending. we will
	 * have had everything blocked while error handling is taking place, and
	 * now that error recovery is done, we will need to ensure that these
	 * requests are started.
	 */
	scsi_run_host_queues(shost);

This issue should be fixable with the following simple one-line change:

	in_scsi_eh = !!scmd->eh_action;

-- 
Ming
Re: [PATCH V3 0/5] dm-rq: improve sequential I/O performance
On Fri, Jan 12 2018 at 8:37pm -0500, Mike Snitzer wrote:

> On Fri, Jan 12 2018 at 8:00pm -0500,
> Bart Van Assche wrote:
> 
> > On Fri, 2018-01-12 at 19:52 -0500, Mike Snitzer wrote:
> > > It was 50 ms before it was 100 ms. No real explaination for these
> > > values other than they seem to make Bart's IB SRP testbed happy?
> > 
> > But that constant was not introduced by me in the dm code.
> 
> No actually it was (not that there's anything wrong with that):
> 
> commit 06eb061f48594aa369f6e852b352410298b317a8
> Author: Bart Van Assche
> Date:   Fri Apr 7 16:50:44 2017 -0700
> 
>     dm mpath: requeue after a small delay if blk_get_request() fails
> 
>     If blk_get_request() returns ENODEV then multipath_clone_and_map()
>     causes a request to be requeued immediately. This can cause a kworker
>     thread to spend 100% of the CPU time of a single core in
>     __blk_mq_run_hw_queue() and also can cause device removal to never
>     finish.
> 
>     Avoid this by only requeuing after a delay if blk_get_request() fails.
>     Additionally, reduce the requeue delay.
> 
>     Cc: sta...@vger.kernel.org # 4.9+
>     Signed-off-by: Bart Van Assche
>     Signed-off-by: Mike Snitzer
> 
> Note that this commit actually details a different case where a
> blk_get_request() (in existing code) return of -ENODEV is a very
> compelling case to use DM_MAPIO_DELAY_REQUEUE.
> 
> SO I'll revisit what is appropriate in multipath_clone_and_map() on
> Monday.

Sleep helped. I had another look and it is only the old .request_fn blk_get_request() code that even sets -ENODEV (if blk_queue_dying). But thankfully the blk_get_request() error handling in multipath_clone_and_map() checks for blk_queue_dying() and will return DM_MAPIO_DELAY_REQUEUE.

So we're all set for this case.

Mike
Re: [PATCH V3 0/5] dm-rq: improve sequential I/O performance
On Sat, Jan 13 2018 at 10:04am -0500, Ming Lei wrote:

> On Fri, Jan 12, 2018 at 05:31:17PM -0500, Mike Snitzer wrote:
> > 
> > Ming or Jens: might you be able to shed some light on how dm-mpath
> > would/could set BLK_MQ_S_SCHED_RESTART?  A new function added that can
> 
> When BLK_STS_RESOURCE is returned from .queue_rq(), blk_mq_dispatch_rq_list()
> will check if BLK_MQ_S_SCHED_RESTART is set.
> 
> If it has been set, the queue won't be rerun for this request, and the queue
> will be rerun until one in-flight request is completed, see
> blk_mq_sched_restart() which is called from blk_mq_free_request().
> 
> If BLK_MQ_S_SCHED_RESTART isn't set, queue is rerun in
> blk_mq_dispatch_rq_list(), and BLK_MQ_S_SCHED_RESTART is set before calling
> .queue_rq(), see blk_mq_sched_mark_restart_hctx() which is called in
> blk_mq_sched_dispatch_requests().
> 
> This mechanism can avoid continuous running queue in case of STS_RESOURCE,
> that means drivers wouldn't worry about that by adding random delay.

Great, thanks for the overview.  Really appreciate it.

Mike
Re: [PATCH V3 0/5] dm-rq: improve sequential I/O performance
On Fri, Jan 12, 2018 at 05:31:17PM -0500, Mike Snitzer wrote: > On Fri, Jan 12 2018 at 1:54pm -0500, > Bart Van Asschewrote: > > > On Fri, 2018-01-12 at 13:06 -0500, Mike Snitzer wrote: > > > OK, you have the stage: please give me a pointer to your best > > > explaination of the several. > > > > Since the previous discussion about this topic occurred more than a month > > ago it could take more time to look up an explanation than to explain it > > again. Anyway, here we go. As you know a block layer request queue needs to > > be rerun if one or more requests are waiting and a previous condition that > > prevented the request to be executed has been cleared. For the dm-mpath > > driver, examples of such conditions are no tags available, a path that is > > busy (see also pgpath_busy()), path initialization that is in progress > > (pg_init_in_progress) or a request completes with status, e.g. if the > > SCSI core calls __blk_mq_end_request(req, error) with error != 0. For some > > of these conditions, e.g. path initialization completes, a callback > > function in the dm-mpath driver is called and it is possible to explicitly > > rerun the queue. I agree that for such scenario's a delayed queue run should > > not be triggered. For other scenario's, e.g. if a SCSI initiator submits a > > SCSI request over a fabric and the SCSI target replies with "BUSY" then the > > SCSI core will end the I/O request with status BLK_STS_RESOURCE after the > > maximum number of retries has been reached (see also scsi_io_completion()). > > In that last case, if a SCSI target sends a "BUSY" reply over the wire back > > to the initiator, there is no other approach for the SCSI initiator to > > figure out whether it can queue another request than to resubmit the > > request. 
The worst possible strategy is to resubmit a request immediately > > because that will cause a significant fraction of the fabric bandwidth to > > be used just for replying "BUSY" to requests that can't be processed > > immediately. > > The thing is, the 2 scenarios you are most concerned about have > _nothing_ to do with dm_mq_queue_rq() at all. They occur as an error in > the IO path _after_ the request is successfully retrieved with > blk_get_request() and then submitted. > > > The intention of commit 6077c2d706097c0 was to address the last mentioned > > case. > > So it really makes me question why you think commit 6077c2d706097c0 > addresses the issue you think it does. And gives me more conviction to > remove 6077c2d706097c0. > > It may help just by virtue of blindly kicking the queue if > blk_get_request() fails (regardless of the target is responding with > BUSY or not). Very unsatisfying to say the least. > > I think it'd be much more beneficial for dm-mpath.c:multipath_end_io() > to be trained to be respond more intelligently to BLK_STS_RESOURCE. > > E.g. set BLK_MQ_S_SCHED_RESTART if requests are known to be outstanding > on the path. This is one case where Ming said the queue would be > re-run, as detailed in this header: > https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=dm-4.16=5b18cff4baedde77e0d69bd62a13ae78f9488d89 > > And Jens has reinforced to me that BLK_MQ_S_SCHED_RESTART is a means to > kicking the queue more efficiently. _BUT_ I'm not seeing any external > blk-mq interface that exposes this capability to a blk-mq driver. As is > BLK_MQ_S_SCHED_RESTART gets set very much in the bowels of blk-mq > (blk_mq_sched_mark_restart_hctx). > > SO I have to do more homework here... > > Ming or Jens: might you be able to shed some light on how dm-mpath > would/could set BLK_MQ_S_SCHED_RESTART? 
> A new function added that can

When BLK_STS_RESOURCE is returned from .queue_rq(), blk_mq_dispatch_rq_list() will check if BLK_MQ_S_SCHED_RESTART is set.

If it has been set, the queue won't be rerun for this request; the queue will be rerun only once an in-flight request is completed, see blk_mq_sched_restart(), which is called from blk_mq_free_request().

If BLK_MQ_S_SCHED_RESTART isn't set, the queue is rerun in blk_mq_dispatch_rq_list(), and BLK_MQ_S_SCHED_RESTART is set before calling .queue_rq(), see blk_mq_sched_mark_restart_hctx(), which is called in blk_mq_sched_dispatch_requests().

This mechanism can avoid continuously rerunning the queue in case of BLK_STS_RESOURCE, which means drivers needn't worry about it by adding random delays.

-- 
Ming
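The dispatch-side behavior Ming describes can be condensed into a sketch. This is simplified pseudocode of the blk-mq restart logic, not the literal kernel source; the requeue helper name is invented for brevity:

```c
/*
 * Simplified sketch (pseudocode) of how blk-mq reacts when ->queue_rq()
 * returns BLK_STS_RESOURCE -- NOT the literal kernel code.
 */

/* In the scheduler dispatch path, before dispatching: */
blk_mq_sched_mark_restart_hctx(hctx);	/* sets BLK_MQ_S_SCHED_RESTART */

/* In blk_mq_dispatch_rq_list(), when a request fails with resource pressure: */
if (ret == BLK_STS_RESOURCE) {
	requeue_remaining_requests(hctx);	/* invented helper name */

	if (!test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state))
		blk_mq_run_hw_queue(hctx, true);	/* rerun now */
	/*
	 * else: do nothing here. blk_mq_sched_restart(), invoked from
	 * blk_mq_free_request() when an in-flight request completes,
	 * reruns the queue -- so the queue is retried exactly when a
	 * resource (tag/budget) has actually been freed, instead of
	 * spinning or relying on an arbitrary delay.
	 */
}
```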
Re: [PATCHSET v5] blk-mq: reimplement timeout handling
On Fri, Jan 12, 2018 at 04:55:34PM -0500, Laurence Oberman wrote: > On Fri, 2018-01-12 at 20:57 +, Bart Van Assche wrote: > > On Tue, 2018-01-09 at 08:29 -0800, Tejun Heo wrote: > > > Currently, blk-mq timeout path synchronizes against the usual > > > issue/completion path using a complex scheme involving atomic > > > bitflags, REQ_ATOM_*, memory barriers and subtle memory coherence > > > rules. Unfortunatley, it contains quite a few holes. > > > > Hello Tejun, > > > > With this patch series applied I see weird hangs in blk_mq_get_tag() > > when I > > run the srp-test software. If I pull Jens' latest for-next branch and > > revert > > this patch series then the srp-test software runs successfully. Note: > > if you > > don't have InfiniBand hardware available then you will need the > > RDMA/CM > > patches for the SRP initiator and target drivers that have been > > posted > > recently on the linux-rdma mailing list to run the srp-test software. > > > > This is how I run the srp-test software in a VM: > > > > ./run_tests -c -d -r 10 > > > > Here is an example of what SysRq-w reported when the hang occurred: > > > > sysrq: SysRq : Show Blocked State > > taskPC stack pid father > > kworker/u8:0D12864 5 2 0x8000 > > Workqueue: events_unbound sd_probe_async [sd_mod] > > Call Trace: > > ? __schedule+0x2b4/0xbb0 > > schedule+0x2d/0x90 > > io_schedule+0xd/0x30 > > blk_mq_get_tag+0x169/0x290 > > ? finish_wait+0x80/0x80 > > blk_mq_get_request+0x16a/0x4f0 > > blk_mq_alloc_request+0x59/0xc0 > > blk_get_request_flags+0x3f/0x260 > > scsi_execute+0x33/0x1e0 [scsi_mod] > > read_capacity_16.part.35+0x9c/0x460 [sd_mod] > > sd_revalidate_disk+0x14bb/0x1cb0 [sd_mod] > > sd_probe_async+0xf2/0x1a0 [sd_mod] > > process_one_work+0x21c/0x6d0 > > worker_thread+0x35/0x380 > > ? process_one_work+0x6d0/0x6d0 > > kthread+0x117/0x130 > > ? kthread_create_worker_on_cpu+0x40/0x40 > > ret_from_fork+0x24/0x30 > > systemd-udevd D13672 1048285 0x0100 > > Call Trace: > > ? 
__schedule+0x2b4/0xbb0 > > schedule+0x2d/0x90 > > io_schedule+0xd/0x30 > > generic_file_read_iter+0x32f/0x970 > > ? page_cache_tree_insert+0x100/0x100 > > __vfs_read+0xcc/0x120 > > vfs_read+0x96/0x140 > > SyS_read+0x40/0xa0 > > do_syscall_64+0x5f/0x1b0 > > entry_SYSCALL64_slow_path+0x25/0x25 > > RIP: 0033:0x7f8ce6d08d11 > > RSP: 002b:7fff96dec288 EFLAGS: 0246 ORIG_RAX: > > > > RAX: ffda RBX: 5651de7f6e10 RCX: 7f8ce6d08d11 > > RDX: 0040 RSI: 5651de7f6e38 RDI: 0007 > > RBP: 5651de7ea500 R08: 7f8ce6cf1c20 R09: 5651de7f6e10 > > R10: 006f R11: 0246 R12: 01ff > > R13: 01ff0040 R14: 5651de7ea550 R15: 0040 > > systemd-udevd D13496 1049285 0x0100 > > Call Trace: > > ? __schedule+0x2b4/0xbb0 > > schedule+0x2d/0x90 > > io_schedule+0xd/0x30 > > blk_mq_get_tag+0x169/0x290 > > ? finish_wait+0x80/0x80 > > blk_mq_get_request+0x16a/0x4f0 > > blk_mq_make_request+0x105/0x8e0 > > ? generic_make_request+0xd6/0x3d0 > > generic_make_request+0x103/0x3d0 > > ? submit_bio+0x57/0x110 > > submit_bio+0x57/0x110 > > mpage_readpages+0x13b/0x160 > > ? I_BDEV+0x10/0x10 > > ? rcu_read_lock_sched_held+0x66/0x70 > > ? __alloc_pages_nodemask+0x2e8/0x360 > > __do_page_cache_readahead+0x2a4/0x370 > > ? force_page_cache_readahead+0xaf/0x110 > > force_page_cache_readahead+0xaf/0x110 > > generic_file_read_iter+0x743/0x970 > > ? find_held_lock+0x2d/0x90 > > ? _raw_spin_unlock+0x29/0x40 > > __vfs_read+0xcc/0x120 > > vfs_read+0x96/0x140 > > SyS_read+0x40/0xa0 > > do_syscall_64+0x5f/0x1b0 > > entry_SYSCALL64_slow_path+0x25/0x25 > > RIP: 0033:0x7f8ce6d08d11 > > RSP: 002b:7fff96dec8b8 EFLAGS: 0246 ORIG_RAX: > > > > RAX: ffda RBX: 7f8ce7085010 RCX: 7f8ce6d08d11 > > RDX: 0004 RSI: 7f8ce7085038 RDI: 000f > > RBP: 5651de7ec840 R08: R09: 7f8ce7085010 > > R10: 7f8ce7085028 R11: 0246 R12: > > R13: 0004 R14: 5651de7ec890 R15: 0004 > > systemd-udevd D13672 1055285 0x0100 > > Call Trace: > > ? __schedule+0x2b4/0xbb0 > > schedule+0x2d/0x90 > > io_schedule+0xd/0x30 > > blk_mq_get_tag+0x169/0x290 > > ? 
finish_wait+0x80/0x80 > > blk_mq_get_request+0x16a/0x4f0 > > blk_mq_make_request+0x105/0x8e0 > > ? generic_make_request+0xd6/0x3d0 > > generic_make_request+0x103/0x3d0 > > ? submit_bio+0x57/0x110 > > submit_bio+0x57/0x110 > > mpage_readpages+0x13b/0x160 > > ? I_BDEV+0x10/0x10 > > ? rcu_read_lock_sched_held+0x66/0x70 > > ? __alloc_pages_nodemask+0x2e8/0x360 > > __do_page_cache_readahead+0x2a4/0x370 > > ? force_page_cache_readahead+0xaf/0x110 > > force_page_cache_readahead+0xaf/0x110 > > generic_file_read_iter+0x743/0x970 > > __vfs_read+0xcc/0x120 > > vfs_read+0x96/0x140 > > SyS_read+0x40/0xa0 > > do_syscall_64+0x5f/0x1b0 > >
Re: [PATCH V3 0/5] dm-rq: improve sequential I/O performance
On Fri, Jan 12, 2018 at 06:54:49PM +, Bart Van Assche wrote: > On Fri, 2018-01-12 at 13:06 -0500, Mike Snitzer wrote: > > OK, you have the stage: please give me a pointer to your best > > explaination of the several. > > Since the previous discussion about this topic occurred more than a month > ago it could take more time to look up an explanation than to explain it > again. Anyway, here we go. As you know a block layer request queue needs to > be rerun if one or more requests are waiting and a previous condition that > prevented the request to be executed has been cleared. For the dm-mpath > driver, examples of such conditions are no tags available, a path that is > busy (see also pgpath_busy()), path initialization that is in progress > (pg_init_in_progress) or a request completes with status, e.g. if the > SCSI core calls __blk_mq_end_request(req, error) with error != 0. For some > of these conditions, e.g. path initialization completes, a callback > function in the dm-mpath driver is called and it is possible to explicitly > rerun the queue. I agree that for such scenario's a delayed queue run should > not be triggered. For other scenario's, e.g. if a SCSI initiator submits a > SCSI request over a fabric and the SCSI target replies with "BUSY" then the > SCSI core will end the I/O request with status BLK_STS_RESOURCE after the > maximum number of retries has been reached (see also scsi_io_completion()). > In that last case, if a SCSI target sends a "BUSY" reply over the wire back > to the initiator, there is no other approach for the SCSI initiator to > figure out whether it can queue another request than to resubmit the > request. The worst possible strategy is to resubmit a request immediately > because that will cause a significant fraction of the fabric bandwidth to > be used just for replying "BUSY" to requests that can't be processed > immediately. 
That isn't true: when BLK_STS_RESOURCE is returned to blk-mq, blk-mq applies BLK_MQ_S_SCHED_RESTART to hold the queue until one in-flight request is completed; please see blk_mq_sched_restart(), which is called from blk_mq_free_request(). Also, now that we have I/O schedulers, when blk_get_request() in dm-mpath returns NULL it doesn't reflect the underlying queue's BUSY state accurately or in time, since by default the scheduler tag depth is double the driver tag depth. So it isn't good to depend on blk_get_request() alone to evaluate the queue's busy status; this patchset provides the underlying queue's dispatch result directly to blk-mq and can deal with this case much better. > > The intention of commit 6077c2d706097c0 was to address the last mentioned > case. It may be possible to move the delayed queue rerun from the > dm_queue_rq() into dm_requeue_original_request(). But I think it would be > wrong to rerun the queue immediately in case a SCSI target system returns > "BUSY". Again, the queue won't be rerun immediately after BLK_STS_RESOURCE is returned to blk-mq. And BLK_MQ_S_SCHED_RESTART should address your concern about continuous resubmission in case of running out of requests, right? Thanks, Ming
Re: [PATCH BUGFIX/IMPROVEMENT 0/2] block, bfq: two pending patches
Hi. 13.01.2018 12:05, Paolo Valente wrote: Hi Jens, here are again the two pending patches you asked me to resend [1]. One of them, fixing read-starvation problems, was accompanied by a cover letter. I'm pasting the content of that cover letter below. The patch addresses (serious) starvation problems caused by request-tag exhaustion, as explained in more detail in the commit message. I started from the solution in the function kyber_limit_depth, but then I had to define more articulate limits, to counter starvation also in cases not covered in kyber_limit_depth. If this solution proves to be effective, I'm willing to port it somehow to the other schedulers. Thanks, Paolo [1] https://www.spinics.net/lists/linux-block/msg21586.html Paolo Valente (2): block, bfq: limit tags for writes and async I/O block, bfq: limit sectors served with interactive weight raising block/bfq-iosched.c | 158 +--- block/bfq-iosched.h | 17 ++ block/bfq-wf2q.c | 3 + 3 files changed, 169 insertions(+), 9 deletions(-) -- 2.15.1 I've been running the system with these patches since the end of December, so with regard to stability and visible smoke: Tested-by: Oleksandr Natalenko for both of them. Many thanks, Paolo!
Re: [for-4.16 PATCH v6 2/4] block: properly protect the 'queue' kobj in blk_unregister_queue
On Fri, Jan 12, 2018 at 11:03:52AM -0500, Mike Snitzer wrote: > The original commit e9a823fb34a8b (block: fix warning when I/O elevator > is changed as request_queue is being removed) is pretty conflated. > "conflated" because the resource being protected by q->sysfs_lock isn't > the queue_flags (it is the 'queue' kobj). > > q->sysfs_lock serializes __elevator_change() (via elv_iosched_store) > from racing with blk_unregister_queue(): > 1) By holding q->sysfs_lock first, __elevator_change() can complete > before a racing blk_unregister_queue(). > 2) Conversely, __elevator_change() is testing for QUEUE_FLAG_REGISTERED > in case elv_iosched_store() loses the race with blk_unregister_queue(), > it needs a way to know the 'queue' kobj isn't there. > > Expand the scope of blk_unregister_queue()'s q->sysfs_lock use so it is > held until after the 'queue' kobj is removed. > > To do so blk_mq_unregister_dev() must not also take q->sysfs_lock. So > rename __blk_mq_unregister_dev() to blk_mq_unregister_dev(). > > Also, blk_unregister_queue() should use q->queue_lock to protect against > any concurrent writes to q->queue_flags -- even though chances are the > queue is being cleaned up so no concurrent writes are likely. > > Fixes: e9a823fb34a8b ("block: fix warning when I/O elevator is changed as > request_queue is being removed") > Signed-off-by: Mike Snitzer > --- > block/blk-mq-sysfs.c | 9 + > block/blk-sysfs.c | 13 ++--- > 2 files changed, 11 insertions(+), 11 deletions(-) > > v6: blk_mq_unregister_dev now requires q->sysfs_lock be held, Ming: I am > not seeing any lockdep complaints with this. I've tested bio-based, > blk-mq and old .request_fn request-based.
>
> diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
> index 79969c3c234f..a54b4b070f1c 100644
> --- a/block/blk-mq-sysfs.c
> +++ b/block/blk-mq-sysfs.c
> @@ -248,7 +248,7 @@ static int blk_mq_register_hctx(struct blk_mq_hw_ctx *hctx)
>  	return ret;
>  }
>  
> -static void __blk_mq_unregister_dev(struct device *dev, struct request_queue *q)
> +void blk_mq_unregister_dev(struct device *dev, struct request_queue *q)
>  {
>  	struct blk_mq_hw_ctx *hctx;
>  	int i;
> @@ -265,13 +265,6 @@ static void __blk_mq_unregister_dev(struct device *dev, struct request_queue *q)
>  	q->mq_sysfs_init_done = false;
>  }
>  
> -void blk_mq_unregister_dev(struct device *dev, struct request_queue *q)
> -{
> -	mutex_lock(&q->sysfs_lock);
> -	__blk_mq_unregister_dev(dev, q);
> -	mutex_unlock(&q->sysfs_lock);
> -}
> -
>  void blk_mq_hctx_kobj_init(struct blk_mq_hw_ctx *hctx)
>  {
>  	kobject_init(&hctx->kobj, &blk_mq_hw_ktype);
> diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
> index 870484eaed1f..9272452ff456 100644
> --- a/block/blk-sysfs.c
> +++ b/block/blk-sysfs.c
> @@ -929,12 +929,17 @@ void blk_unregister_queue(struct gendisk *disk)
>  	if (WARN_ON(!q))
>  		return;
>  
> +	/*
> +	 * Protect against the 'queue' kobj being accessed
> +	 * while/after it is removed.
> +	 */
>  	mutex_lock(&q->sysfs_lock);
> -	queue_flag_clear_unlocked(QUEUE_FLAG_REGISTERED, q);
> -	mutex_unlock(&q->sysfs_lock);
>  
> -	wbt_exit(q);
> +	spin_lock_irq(q->queue_lock);
> +	queue_flag_clear(QUEUE_FLAG_REGISTERED, q);
> +	spin_unlock_irq(q->queue_lock);
>  
> +	wbt_exit(q);
>  
>  	if (q->mq_ops)
>  		blk_mq_unregister_dev(disk_to_dev(disk), q);
> @@ -946,4 +951,6 @@ void blk_unregister_queue(struct gendisk *disk)
>  	kobject_del(&q->kobj);
>  	blk_trace_remove_sysfs(disk_to_dev(disk));
>  	kobject_put(&disk_to_dev(disk)->kobj);
> +
> +	mutex_unlock(&q->sysfs_lock);
>  }

Reviewed-by: Ming Lei

-- 
Ming
Re: [PATCHSET v5] blk-mq: reimplement timeout handling
On Fri, Jan 12, 2018 at 04:55:34PM -0500, Laurence Oberman wrote: > On Fri, 2018-01-12 at 20:57 +, Bart Van Assche wrote: > > On Tue, 2018-01-09 at 08:29 -0800, Tejun Heo wrote: > > > Currently, blk-mq timeout path synchronizes against the usual > > > issue/completion path using a complex scheme involving atomic > > > bitflags, REQ_ATOM_*, memory barriers and subtle memory coherence > > > rules. Unfortunatley, it contains quite a few holes. > > > > Hello Tejun, > > > > With this patch series applied I see weird hangs in blk_mq_get_tag() > > when I > > run the srp-test software. If I pull Jens' latest for-next branch and > > revert > > this patch series then the srp-test software runs successfully. Note: > > if you > > don't have InfiniBand hardware available then you will need the > > RDMA/CM > > patches for the SRP initiator and target drivers that have been > > posted > > recently on the linux-rdma mailing list to run the srp-test software. > > > > This is how I run the srp-test software in a VM: > > > > ./run_tests -c -d -r 10 > > > > Here is an example of what SysRq-w reported when the hang occurred: > > > > sysrq: SysRq : Show Blocked State > > taskPC stack pid father > > kworker/u8:0D12864 5 2 0x8000 > > Workqueue: events_unbound sd_probe_async [sd_mod] > > Call Trace: > > ? __schedule+0x2b4/0xbb0 > > schedule+0x2d/0x90 > > io_schedule+0xd/0x30 > > blk_mq_get_tag+0x169/0x290 > > ? finish_wait+0x80/0x80 > > blk_mq_get_request+0x16a/0x4f0 > > blk_mq_alloc_request+0x59/0xc0 > > blk_get_request_flags+0x3f/0x260 > > scsi_execute+0x33/0x1e0 [scsi_mod] > > read_capacity_16.part.35+0x9c/0x460 [sd_mod] > > sd_revalidate_disk+0x14bb/0x1cb0 [sd_mod] > > sd_probe_async+0xf2/0x1a0 [sd_mod] > > process_one_work+0x21c/0x6d0 > > worker_thread+0x35/0x380 > > ? process_one_work+0x6d0/0x6d0 > > kthread+0x117/0x130 > > ? kthread_create_worker_on_cpu+0x40/0x40 > > ret_from_fork+0x24/0x30 > > systemd-udevd D13672 1048285 0x0100 > > Call Trace: > > ? 
__schedule+0x2b4/0xbb0 > > schedule+0x2d/0x90 > > io_schedule+0xd/0x30 > > generic_file_read_iter+0x32f/0x970 > > ? page_cache_tree_insert+0x100/0x100 > > __vfs_read+0xcc/0x120 > > vfs_read+0x96/0x140 > > SyS_read+0x40/0xa0 > > do_syscall_64+0x5f/0x1b0 > > entry_SYSCALL64_slow_path+0x25/0x25 > > RIP: 0033:0x7f8ce6d08d11 > > RSP: 002b:7fff96dec288 EFLAGS: 0246 ORIG_RAX: > > > > RAX: ffda RBX: 5651de7f6e10 RCX: 7f8ce6d08d11 > > RDX: 0040 RSI: 5651de7f6e38 RDI: 0007 > > RBP: 5651de7ea500 R08: 7f8ce6cf1c20 R09: 5651de7f6e10 > > R10: 006f R11: 0246 R12: 01ff > > R13: 01ff0040 R14: 5651de7ea550 R15: 0040 > > systemd-udevd D13496 1049285 0x0100 > > Call Trace: > > ? __schedule+0x2b4/0xbb0 > > schedule+0x2d/0x90 > > io_schedule+0xd/0x30 > > blk_mq_get_tag+0x169/0x290 > > ? finish_wait+0x80/0x80 > > blk_mq_get_request+0x16a/0x4f0 > > blk_mq_make_request+0x105/0x8e0 > > ? generic_make_request+0xd6/0x3d0 > > generic_make_request+0x103/0x3d0 > > ? submit_bio+0x57/0x110 > > submit_bio+0x57/0x110 > > mpage_readpages+0x13b/0x160 > > ? I_BDEV+0x10/0x10 > > ? rcu_read_lock_sched_held+0x66/0x70 > > ? __alloc_pages_nodemask+0x2e8/0x360 > > __do_page_cache_readahead+0x2a4/0x370 > > ? force_page_cache_readahead+0xaf/0x110 > > force_page_cache_readahead+0xaf/0x110 > > generic_file_read_iter+0x743/0x970 > > ? find_held_lock+0x2d/0x90 > > ? _raw_spin_unlock+0x29/0x40 > > __vfs_read+0xcc/0x120 > > vfs_read+0x96/0x140 > > SyS_read+0x40/0xa0 > > do_syscall_64+0x5f/0x1b0 > > entry_SYSCALL64_slow_path+0x25/0x25 > > RIP: 0033:0x7f8ce6d08d11 > > RSP: 002b:7fff96dec8b8 EFLAGS: 0246 ORIG_RAX: > > > > RAX: ffda RBX: 7f8ce7085010 RCX: 7f8ce6d08d11 > > RDX: 0004 RSI: 7f8ce7085038 RDI: 000f > > RBP: 5651de7ec840 R08: R09: 7f8ce7085010 > > R10: 7f8ce7085028 R11: 0246 R12: > > R13: 0004 R14: 5651de7ec890 R15: 0004 > > systemd-udevd D13672 1055285 0x0100 > > Call Trace: > > ? __schedule+0x2b4/0xbb0 > > schedule+0x2d/0x90 > > io_schedule+0xd/0x30 > > blk_mq_get_tag+0x169/0x290 > > ? 
finish_wait+0x80/0x80 > > blk_mq_get_request+0x16a/0x4f0 > > blk_mq_make_request+0x105/0x8e0 > > ? generic_make_request+0xd6/0x3d0 > > generic_make_request+0x103/0x3d0 > > ? submit_bio+0x57/0x110 > > submit_bio+0x57/0x110 > > mpage_readpages+0x13b/0x160 > > ? I_BDEV+0x10/0x10 > > ? rcu_read_lock_sched_held+0x66/0x70 > > ? __alloc_pages_nodemask+0x2e8/0x360 > > __do_page_cache_readahead+0x2a4/0x370 > > ? force_page_cache_readahead+0xaf/0x110 > > force_page_cache_readahead+0xaf/0x110 > > generic_file_read_iter+0x743/0x970 > > __vfs_read+0xcc/0x120 > > vfs_read+0x96/0x140 > > SyS_read+0x40/0xa0 > > do_syscall_64+0x5f/0x1b0 > >
[PATCH BUGFIX/IMPROVEMENT 1/2] block, bfq: limit tags for writes and async I/O
Asynchronous I/O can easily starve synchronous I/O (both sync reads and sync writes) by consuming all request tags. Similarly, storms of synchronous writes, such as those that sync(2) may trigger, can starve synchronous reads. In their turn, these two problems may also cause BFQ to lose control of latency for interactive and soft real-time applications. For example, on a PLEXTOR PX-256M5S SSD, LibreOffice Writer takes 0.6 seconds to start if the device is idle, but it takes more than 45 seconds (!) if there are sequential writes in the background. This commit addresses this issue by limiting the maximum percentage of tags that asynchronous I/O requests and synchronous write requests can consume. In particular, this commit grants a higher threshold to synchronous writes, to prevent the latter from being starved by asynchronous I/O. According to the above test, LibreOffice Writer now starts in about 1.2 seconds on average, regardless of the background workload, and apart from some rare outlier. To check this improvement, run, e.g., sudo ./comm_startup_lat.sh bfq 5 5 seq 10 "lowriter --terminate_after_init" for the comm_startup_lat benchmark in the S suite [1]. [1] https://github.com/Algodev-github/S

Tested-by: Oleksandr Natalenko
Tested-by: Holger Hoffstätte
Signed-off-by: Paolo Valente
---
 block/bfq-iosched.c | 77 +
 block/bfq-iosched.h | 12 +
 2 files changed, 89 insertions(+)

diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 1caeecad7af1..527bd2ccda51 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -417,6 +417,82 @@ static struct request *bfq_choose_req(struct bfq_data *bfqd,
 	}
 }
 
+/*
+ * See the comments on bfq_limit_depth for the purpose of
+ * the depths set in the function.
+ */
+static void bfq_update_depths(struct bfq_data *bfqd, struct sbitmap_queue *bt)
+{
+	bfqd->sb_shift = bt->sb.shift;
+
+	/*
+	 * In-word depths if no bfq_queue is being weight-raised:
+	 * leaving 25% of tags only for sync reads.
+	 *
+	 * In next formulas, right-shift the value
+	 * (1U<<bfqd->sb_shift), instead of computing directly
+	 * (1U<<(bfqd->sb_shift - something)), to be robust against
+	 * any possible value of bfqd->sb_shift, without having to
+	 * limit 'something'.
+	 */
+	/* no more than 50% of tags for async I/O */
+	bfqd->word_depths[0][0] = max((1U<<bfqd->sb_shift)>>1, 1U);
+	/*
+	 * no more than 75% of tags for sync writes (25% extra tags
+	 * w.r.t. async I/O, to prevent async I/O from starving sync
+	 * writes)
+	 */
+	bfqd->word_depths[0][1] = max(((1U<<bfqd->sb_shift) * 3)>>2, 1U);
+
+	/*
+	 * In-word depths in case some bfq_queue is being weight-
+	 * raised: leaving ~63% of tags for sync reads. This is the
+	 * highest percentage for which, in our tests, application
+	 * start-up times didn't suffer from any regression due to tag
+	 * shortage.
+	 */
+	/* no more than ~18% of tags for async I/O */
+	bfqd->word_depths[1][0] = max(((1U<<bfqd->sb_shift) * 3)>>4, 1U);
+	/* no more than ~37% of tags for sync writes (~20% extra tags) */
+	bfqd->word_depths[1][1] = max(((1U<<bfqd->sb_shift) * 6)>>4, 1U);
+}
+
+/*
+ * Async I/O can easily starve sync I/O (both sync reads and sync
+ * writes), by consuming all tags. Similarly, storms of sync writes,
+ * such as those that sync(2) may trigger, can starve sync reads.
+ * Limit depths of async I/O and sync writes so as to counter both
+ * problems.
+ */
+static void bfq_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
+{
+	struct blk_mq_tags *tags = blk_mq_tags_from_data(data);
+	struct bfq_data *bfqd = data->q->elevator->elevator_data;
+	struct sbitmap_queue *bt;
+
+	if (op_is_sync(op) && !op_is_write(op))
+		return;
+
+	if (data->flags & BLK_MQ_REQ_RESERVED) {
+		if (unlikely(!tags->nr_reserved_tags)) {
+			WARN_ON_ONCE(1);
+			return;
+		}
+		bt = &tags->breserved_tags;
+	} else
+		bt = &tags->bitmap_tags;
+
+	if (unlikely(bfqd->sb_shift != bt->sb.shift))
+		bfq_update_depths(bfqd, bt);
+
+	data->shallow_depth =
+		bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(op)];
+
+	bfq_log(bfqd, "[%s] wr_busy %d sync %d depth %u",
+		__func__, bfqd->wr_busy_queues, op_is_sync(op),
+		data->shallow_depth);
+}
+
 static struct bfq_queue *
 bfq_rq_pos_tree_lookup(struct bfq_data *bfqd, struct rb_root *root,
		       sector_t sector, struct rb_node **ret_parent,
@@ -5267,6 +5343,7 @@ static struct elv_fs_entry bfq_attrs[] = {
 static struct elevator_type iosched_bfq_mq = {
	.ops.mq = {
+
[PATCH BUGFIX/IMPROVEMENT 2/2] block, bfq: limit sectors served with interactive weight raising
To maximise responsiveness, BFQ raises the weight, and performs device idling, for bfq_queues associated with processes deemed as interactive. In particular, weight raising has a maximum duration, equal to the time needed to start a large application. If a weight-raised process goes on doing I/O beyond this maximum duration, it loses weight-raising. This mechanism is evidently vulnerable to the following false positives: I/O-bound applications that will go on doing I/O for much longer than the duration of weight-raising. These applications have basically no benefit from being weight-raised at the beginning of their I/O. On the opposite end, while being weight-raised, these applications a) unjustly steal throughput from applications that may truly need low latency; b) make BFQ uselessly perform device idling; device idling results in loss of device throughput with most flash-based storage, and may increase latencies when used purposelessly. This commit adds a countermeasure to reduce both the above problems. To introduce this countermeasure, we provide the following extra piece of information (full details in the comments added by this commit). During the start-up of the large application used as a reference to set the duration of weight-raising, involved processes transfer at most ~110K sectors each. Accordingly, a process initially deemed as interactive has no right to be weight-raised any longer once it has transferred 110K sectors or more. Based on this consideration, this commit early-ends weight-raising for a bfq_queue if the latter happens to have received an amount of service at least equal to 110K sectors (actually, a little bit more, to keep a safety margin). I/O-bound applications that reach a high throughput, such as a file copy, reach this threshold well before the allowed weight-raising period finishes. Thus this early ending of weight-raising reduces the amount of time during which these applications cause the problems described above.
Tested-by: Oleksandr Natalenko Tested-by: Holger Hoffstätte Signed-off-by: Paolo Valente --- block/bfq-iosched.c | 81 +++-- block/bfq-iosched.h | 5 block/bfq-wf2q.c | 3 ++ 3 files changed, 80 insertions(+), 9 deletions(-) diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index 527bd2ccda51..93a97a7fe519 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -209,15 +209,17 @@ static struct kmem_cache *bfq_pool; * interactive applications automatically, using the following formula: * duration = (R / r) * T, where r is the peak rate of the device, and * R and T are two reference parameters. - In particular, R is the peak rate of the reference device (see below), - and T is a reference time: given the systems that are likely to be - installed on the reference device according to its speed class, T is - about the maximum time needed, under BFQ and while reading two files in - parallel, to load typical large applications on these systems. - In practice, the slower/faster the device at hand is, the more/less it - takes to load applications with respect to the reference device. - Accordingly, the longer/shorter BFQ grants weight raising to interactive - applications. + * In particular, R is the peak rate of the reference device (see + * below), and T is a reference time: given the systems that are + * likely to be installed on the reference device according to its + * speed class, T is about the maximum time needed, under BFQ and + * while reading two files in parallel, to load typical large + * applications on these systems (see the comments on + * max_service_from_wr below, for more details on how T is obtained). + * In practice, the slower/faster the device at hand is, the more/less + * it takes to load applications with respect to the reference device. + * Accordingly, the longer/shorter BFQ grants weight raising to + * interactive applications. * * BFQ uses four different reference pairs (R, T), depending on: * .
whether the device is rotational or non-rotational; @@ -254,6 +256,60 @@ static int T_slow[2]; static int T_fast[2]; static int device_speed_thresh[2]; +/* + * BFQ uses the above-detailed, time-based weight-raising mechanism to + * privilege interactive tasks. This mechanism is vulnerable to the + * following false positives: I/O-bound applications that will go on + * doing I/O for much longer than the duration of weight + * raising. These applications have basically no benefit from being + * weight-raised at the beginning of their I/O. On the opposite end, + * while being weight-raised, these applications + * a) unjustly steal throughput to applications that may actually need + * low latency; + * b) make BFQ uselessly perform device idling; device idling results + * in loss of device throughput with most flash-based storage, and may + * increase latencies when used
[PATCH BUGFIX/IMPROVEMENT 0/2] block, bfq: two pending patches
Hi Jens, here are again the two pending patches you asked me to resend [1]. One of them, fixing read-starvation problems, was accompanied by a cover letter. I'm pasting the content of that cover letter below. The patch addresses (serious) starvation problems caused by request-tag exhaustion, as explained in more detail in the commit message. I started from the solution in the function kyber_limit_depth, but then I had to define more articulate limits, to counter starvation also in cases not covered in kyber_limit_depth. If this solution proves to be effective, I'm willing to port it somehow to the other schedulers. Thanks, Paolo [1] https://www.spinics.net/lists/linux-block/msg21586.html Paolo Valente (2): block, bfq: limit tags for writes and async I/O block, bfq: limit sectors served with interactive weight raising block/bfq-iosched.c | 158 +--- block/bfq-iosched.h | 17 ++ block/bfq-wf2q.c | 3 + 3 files changed, 169 insertions(+), 9 deletions(-) -- 2.15.1
Re: [PATCH v2] delayacct: Account blkio completion on the correct task
On Mon, Dec 18, 2017 at 9:45 PM, Josh Snyder wrote: > Before commit e33a9bba85a8 ("sched/core: move IO scheduling accounting from > io_schedule_timeout() into scheduler"), delayacct_blkio_end was called after > context-switching into the task which completed I/O. This resulted in double > counting: the task would account a delay both waiting for I/O and for time > spent in the runqueue. > Yes, we included the time spent in the runqueue in the delays accounted to I/O. > With e33a9bba85a8, delayacct_blkio_end is called by try_to_wake_up. In > ttwu, we have not yet context-switched. This is more correct, in that the > delay accounting ends when the I/O is complete. But delayacct_blkio_end > relies upon `get_current()`, and we have not yet context-switched into the > task whose I/O completed. This results in the wrong task having its delay > accounting statistics updated. > > Instead of doing that, pass the task_struct being woken to > delayacct_blkio_end, so that it can update the statistics of the correct > task. > > Fixes: e33a9bba85a8 ("sched/core: move IO scheduling accounting from > io_schedule_timeout() into scheduler") > Signed-off-by: Josh Snyder > --- Acked-by: Balbir Singh