Re: [PATCH V11 12/19] block: allow bio_for_each_segment_all() to iterate over multi-page bvec

2018-11-22 Thread Ming Lei
On Thu, Nov 22, 2018 at 12:03:15PM +0100, Christoph Hellwig wrote: > > +/* used for chunk_for_each_segment */ > > +static inline void bvec_next_segment(const struct bio_vec *bvec, > > +struct bvec_iter_all *iter_all) > > FYI, chunk_for_each_segment doesn't exist any

Re: [PATCH V11 07/19] fs/buffer.c: use bvec iterator to truncate the bio

2018-11-22 Thread Ming Lei
On Thu, Nov 22, 2018 at 11:58:49AM +0100, Christoph Hellwig wrote: > Btw, given that this is the last user of bvec_last_segment after my > other patches I think we should kill bvec_last_segment and do something > like this here: > > > diff --git a/fs/buffer.c b/fs/buffer.c > index fa37ad52e962..a

Re: [PATCH blktests] Add use of logger so that syslog files show when each test starts

2018-11-22 Thread Theodore Y. Ts'o
Ping? - Ted On Mon, Oct 29, 2018 at 12:15:57PM -0400, Theodore Ts'o wrote: > Signed-off-by: Theodore Ts'o > --- > check | 3 +++ > 1 file changed, 3 insertions(+) > > diff --git a/check b/check > index f6c3537..ebd

Re: [PATCH 8/8] aio: support for IO polling

2018-11-22 Thread Jens Axboe
On 11/22/18 4:13 AM, Jan Kara wrote: > > On Tue 20-11-18 10:19:53, Jens Axboe wrote: >> +/* >> + * We can't just wait for polled events to come to us, we have to actively >> + * find and complete them. >> + */ >> +static void aio_iopoll_reap_events(struct kioctx *ctx) >> +{ >> +if (!(ctx->flag

[PATCH AUTOSEL 4.19 15/36] floppy: fix race condition in __floppy_read_block_0()

2018-11-22 Thread Sasha Levin
From: Jens Axboe [ Upstream commit de7b75d82f70c5469675b99ad632983c50b6f7e7 ] LKP recently reported a hang at bootup in the floppy code: [ 245.678853] INFO: task mount:580 blocked for more than 120 seconds. [ 245.679906] Tainted: GT 4.19.0-rc6-00172-ga9f38e1 #1 [ 245.68

[PATCH AUTOSEL 4.14 07/21] floppy: fix race condition in __floppy_read_block_0()

2018-11-22 Thread Sasha Levin
From: Jens Axboe [ Upstream commit de7b75d82f70c5469675b99ad632983c50b6f7e7 ] LKP recently reported a hang at bootup in the floppy code: [ 245.678853] INFO: task mount:580 blocked for more than 120 seconds. [ 245.679906] Tainted: GT 4.19.0-rc6-00172-ga9f38e1 #1 [ 245.68

[PATCH AUTOSEL 4.9 06/15] floppy: fix race condition in __floppy_read_block_0()

2018-11-22 Thread Sasha Levin
From: Jens Axboe [ Upstream commit de7b75d82f70c5469675b99ad632983c50b6f7e7 ] LKP recently reported a hang at bootup in the floppy code: [ 245.678853] INFO: task mount:580 blocked for more than 120 seconds. [ 245.679906] Tainted: GT 4.19.0-rc6-00172-ga9f38e1 #1 [ 245.68

[PATCH AUTOSEL 4.19 24/36] block: copy ioprio in __bio_clone_fast() and bounce

2018-11-22 Thread Sasha Levin
From: Hannes Reinecke [ Upstream commit ca474b73896bf6e0c1eb8787eb217b0f80221610 ] We need to copy the io priority, too; otherwise the clone will run with a different priority than the original one. Fixes: 43b62ce3ff0a ("block: move bio io prio to a new field") Signed-off-by: Hannes Reinecke S

[PATCH 5/5] bcache: set writeback_percent in a flexible range

2018-11-22 Thread Coly Li
Because CUTOFF_WRITEBACK is defined as 40, before the change to dynamic cutoff writeback values writeback_percent was limited to [0, CUTOFF_WRITEBACK]; any value larger than CUTOFF_WRITEBACK was fixed up to 40. Now the cutoff writeback limit is a dynamic value, bch_cutoff_writeback, so the range
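The clamping behaviour described above can be sketched in plain userspace C. This is an illustrative mock, not the actual bcache code: the variable `bch_cutoff_writeback` is named after the symbol mentioned in the mail, but its value here and the helper `clamp_writeback_percent()` are assumptions for demonstration.

```c
/* Illustrative stand-in for the dynamic cutoff (formerly the constant 40). */
static unsigned int bch_cutoff_writeback = 70;

/* Clamp a requested writeback_percent into [0, bch_cutoff_writeback].
 * Mirrors the described fix-up: oversized values snap to the cutoff. */
static unsigned int clamp_writeback_percent(unsigned int requested)
{
	if (requested > bch_cutoff_writeback)
		return bch_cutoff_writeback;
	return requested;
}
```

With a dynamic cutoff of 70, a request of 100 would be fixed up to 70 rather than the old hard-coded 40.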

[PATCH 3/5] bcache: add MODULE_DESCRIPTION information

2018-11-22 Thread Coly Li
This patch moves MODULE_AUTHOR and MODULE_LICENSE to the end of super.c, and adds MODULE_DESCRIPTION("Bcache: a Linux block layer cache"). This is preparation for adding module parameters. Signed-off-by: Coly Li --- drivers/md/bcache/super.c | 7 --- 1 file changed, 4 insertions(+), 3 deletions(

[PATCH 4/5] bcache: make cutoff_writeback and cutoff_writeback_sync tunable

2018-11-22 Thread Coly Li
Currently the cutoff writeback and cutoff writeback sync thresholds are defined by CUTOFF_WRITEBACK (40) and CUTOFF_WRITEBACK_SYNC (70) as static values. Most of the time they work fine, but when people want to do research on bcache writeback mode performance tuning, there is no chance to modify

[PATCH 1/5] bcache: introduce force_wake_up_gc()

2018-11-22 Thread Coly Li
The garbage collection thread starts to work when c->sectors_to_gc is a negative value; otherwise nothing will happen even if the gc thread is woken up by wake_up_gc(). force_wake_up_gc() sets c->sectors_to_gc to -1 before calling wake_up_gc(), then the gc thread may have a chance to run if no one else sets c->s
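The trigger mechanism can be mocked in userspace as follows. The names `wake_up_gc`, `force_wake_up_gc`, and `sectors_to_gc` come from the mail above; the struct layout and the `gc_ran` flag are illustrative stand-ins for the real kernel thread machinery.

```c
/* Userspace mock: gc does work only when sectors_to_gc has gone negative. */
struct cache_set {
	long sectors_to_gc;
	int gc_ran;	/* stand-in for "the gc thread did a pass" */
};

static void wake_up_gc(struct cache_set *c)
{
	/* The woken gc thread checks the counter before doing anything. */
	if (c->sectors_to_gc < 0)
		c->gc_ran = 1;
}

/* force_wake_up_gc(): make the wakeup effective by forcing the counter
 * negative first, as described in the patch. */
static void force_wake_up_gc(struct cache_set *c)
{
	c->sectors_to_gc = -1;
	wake_up_gc(c);
}
```

A plain wake_up_gc() with a positive counter is a no-op; the forced variant always results in a gc pass.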

[PATCH 0/5] Writeback performance tuning options

2018-11-22 Thread Coly Li
I received a requirement to provide options that permit people to do research on writeback performance tuning for their extremely heavy workloads. These options are required to be disabled by default to avoid changing current code behavior. This series adds several disabled-by-default options for wr

[PATCH 2/5] bcache: option to automatically run gc thread after writeback accomplished

2018-11-22 Thread Coly Li
The option gc_after_writeback is disabled by default, because garbage collection discards SSD data, which drops cached data. Echoing 1 into /sys/fs/bcache//internal/gc_after_writeback enables this option, which wakes up the gc thread when writeback is accomplished and all cached data is clean. This
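The wakeup decision reduces to a simple predicate, sketched here as a userspace mock. The function name and parameters are hypothetical; only the two conditions (option enabled, no dirty data left) come from the description above.

```c
/* Mock of the gc_after_writeback wakeup decision: wake gc only when the
 * option is enabled AND writeback has left no dirty sectors behind. */
static int should_wake_gc_after_writeback(int gc_after_writeback,
					  unsigned long dirty_sectors)
{
	return gc_after_writeback && dirty_sectors == 0;
}
```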

Re: [PATCH V11 03/19] block: introduce bio_for_each_bvec()

2018-11-22 Thread Ming Lei
On Thu, Nov 22, 2018 at 11:30:33AM +0100, Christoph Hellwig wrote: > Btw, this patch instead of the plain revert might make it a little > more clear what is going on by skipping the confusing helper altogether > and operating on the raw bvec array: > > > diff --git a/include/linux/bio.h b/include/li

[PATCH] lightnvm: pblk: avoid ref warning on cache creation

2018-11-22 Thread Javier González
The current kref implementation around pblk global caches triggers a false positive on refcount_inc_checked() (when called) as the kref is initialized to 0. Instead of using kref_inc() on a 0 reference, which is in principle correct, use kref_init() to avoid the check. This is also more explicit ab
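The false positive is easy to reproduce with a userspace mock of the checked-refcount semantics that CONFIG_REFCOUNT_FULL enables: incrementing a counter that still reads 0 looks like a use-after-free and fires a warning, while kref_init() starts the count at 1 so the first increment never sees 0. The `mock_kref_*` names are illustrative, not the real kernel API.

```c
/* Userspace mock of checked refcounting. Incrementing from 0 is flagged,
 * which is the false positive described in the patch. */
struct mock_kref {
	int refcount;
	int warned;
};

static void mock_kref_init(struct mock_kref *k)
{
	k->refcount = 1;	/* kref_init() starts at 1, not 0 */
}

static void mock_kref_get(struct mock_kref *k)
{
	if (k->refcount == 0)
		k->warned = 1;	/* refcount_inc() on 0 triggers the warning */
	k->refcount++;
}
```

Incrementing a zero-initialized mock trips the warning; initializing first does not.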

lightnvm: pblk: avoid ref warning on cache creation

2018-11-22 Thread Javier González
Matias, Can you pick this up for 4.20? Even though it is not an error per se, it does trigger an ugly false positive warning when CONFIG_REFCOUNT_FULL is set. Thanks, Javier Javier González (1): lightnvm: pblk: avoid ref warning on cache creation drivers/lightnvm/pblk-init.c | 14 +--

Re: [PATCH 8/8] aio: support for IO polling

2018-11-22 Thread Jan Kara
On Tue 20-11-18 10:19:53, Jens Axboe wrote: > +/* > + * We can't just wait for polled events to come to us, we have to actively > + * find and complete them. > + */ > +static void aio_iopoll_reap_events(struct kioctx *ctx) > +{ > + if (!(ctx->flags & IOCTX_FLAG_IOPOLL)) > + return

Re: [PATCH V11 12/19] block: allow bio_for_each_segment_all() to iterate over multi-page bvec

2018-11-22 Thread Christoph Hellwig
> +/* used for chunk_for_each_segment */ > +static inline void bvec_next_segment(const struct bio_vec *bvec, > + struct bvec_iter_all *iter_all) FYI, chunk_for_each_segment doesn't exist anymore, this is bvec_for_each_segment now. Not sure the comment helps much,

Re: [PATCH V11 07/19] fs/buffer.c: use bvec iterator to truncate the bio

2018-11-22 Thread Christoph Hellwig
Btw, given that this is the last user of bvec_last_segment after my other patches I think we should kill bvec_last_segment and do something like this here: diff --git a/fs/buffer.c b/fs/buffer.c index fa37ad52e962..af5e135d2b83 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -2981,6 +2981,14 @@ sta

Re: [PATCH V11 14/19] block: handle non-cluster bio out of blk_bio_segment_split

2018-11-22 Thread Christoph Hellwig
On Thu, Nov 22, 2018 at 06:46:05PM +0800, Ming Lei wrote: > Then your patch should work by just replacing virt boundary with segment > boundary limit. I will do that change in V12 if you don't object. Please do, thanks a lot.

Re: [PATCH V11 14/19] block: handle non-cluster bio out of blk_bio_segment_split

2018-11-22 Thread Ming Lei
On Thu, Nov 22, 2018 at 11:41:50AM +0100, Christoph Hellwig wrote: > On Thu, Nov 22, 2018 at 06:32:09PM +0800, Ming Lei wrote: > > On Thu, Nov 22, 2018 at 11:04:28AM +0100, Christoph Hellwig wrote: > > > On Thu, Nov 22, 2018 at 05:33:00PM +0800, Ming Lei wrote: > > > > However, using virt boundary

Re: [PATCH V11 14/19] block: handle non-cluster bio out of blk_bio_segment_split

2018-11-22 Thread Christoph Hellwig
On Thu, Nov 22, 2018 at 06:32:09PM +0800, Ming Lei wrote: > On Thu, Nov 22, 2018 at 11:04:28AM +0100, Christoph Hellwig wrote: > > On Thu, Nov 22, 2018 at 05:33:00PM +0800, Ming Lei wrote: > > > However, using virt boundary limit on non-cluster seems over-kill, > > > because the bio will be over-sp

Re: [PATCH V11 14/19] block: handle non-cluster bio out of blk_bio_segment_split

2018-11-22 Thread Christoph Hellwig
On Thu, Nov 22, 2018 at 06:26:17PM +0800, Ming Lei wrote: > Suppose one bio includes (pg0, 0, 512) and (pg1, 512, 512): > > The split is introduced by the following code in blk_bio_segment_split(): > > if (bvprvp && bvec_gap_to_prev(q, bvprvp, bv.bv_offset)) > goto spl
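The split condition quoted above can be mocked in userspace. The signature here is simplified (the real bvec_gap_to_prev() takes a request_queue and a bio_vec); the boundary arithmetic is the point: a gap exists when the previous bvec does not end on the virt boundary or the next one does not start on it.

```c
/* Simplified mock of the bvec gap test. virt_boundary is a mask such as
 * PAGE_SIZE - 1 (4095 for 4K pages); 0 means "no boundary limit". */
static int gap_to_prev(unsigned long virt_boundary,
		       unsigned int prv_offset, unsigned int prv_len,
		       unsigned int nxt_offset)
{
	if (!virt_boundary)
		return 0;
	return ((prv_offset + prv_len) & virt_boundary) ||
	       (nxt_offset & virt_boundary);
}
```

For the example in the mail, (pg0, 0, 512) followed by (pg1, 512, 512) with a PAGE_SIZE - 1 boundary: the previous bvec ends at offset 512, which is not boundary-aligned, so the bio is split, illustrating why a virt boundary over-splits bios made of small sub-page bvecs.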

Re: [PATCH V11 14/19] block: handle non-cluster bio out of blk_bio_segment_split

2018-11-22 Thread Ming Lei
On Thu, Nov 22, 2018 at 11:04:28AM +0100, Christoph Hellwig wrote: > On Thu, Nov 22, 2018 at 05:33:00PM +0800, Ming Lei wrote: > > However, using virt boundary limit on non-cluster seems over-kill, > > because the bio will be over-split(each small bvec may be split as one bio) > > if it includes lo

Re: [PATCH V11 03/19] block: introduce bio_for_each_bvec()

2018-11-22 Thread Christoph Hellwig
Btw, this patch instead of the plain revert might make it a little more clear what is going on by skipping the confusing helper altogether and operating on the raw bvec array: diff --git a/include/linux/bio.h b/include/linux/bio.h index e5b975fa0558..926550ce2d21 100644 --- a/include/linux/bio.h +++

Re: [PATCH V11 14/19] block: handle non-cluster bio out of blk_bio_segment_split

2018-11-22 Thread Ming Lei
On Thu, Nov 22, 2018 at 11:04:28AM +0100, Christoph Hellwig wrote: > On Thu, Nov 22, 2018 at 05:33:00PM +0800, Ming Lei wrote: > > However, using virt boundary limit on non-cluster seems over-kill, > > because the bio will be over-split(each small bvec may be split as one bio) > > if it includes lo

Re: [PATCH V11 03/19] block: introduce bio_for_each_bvec()

2018-11-22 Thread Christoph Hellwig
On Thu, Nov 22, 2018 at 06:15:28PM +0800, Ming Lei wrote: > > while (bytes) { > > - unsigned segment_len = segment_iter_len(bv, *iter); > > - > > - if (max_seg_len < BVEC_MAX_LEN) > > - segment_len = min_t(unsigned, segment_len, > > -

Re: [PATCH V11 03/19] block: introduce bio_for_each_bvec()

2018-11-22 Thread Ming Lei
On Wed, Nov 21, 2018 at 06:12:17PM +0100, Christoph Hellwig wrote: > On Wed, Nov 21, 2018 at 05:10:25PM +0100, Christoph Hellwig wrote: > > No - I think we can always use the code without any segment in > > bvec_iter_advance. Because bvec_iter_advance only operates on the > > iterator, the genera

Re: [PATCH V11 14/19] block: handle non-cluster bio out of blk_bio_segment_split

2018-11-22 Thread Christoph Hellwig
On Thu, Nov 22, 2018 at 05:33:00PM +0800, Ming Lei wrote: > However, using virt boundary limit on non-cluster seems over-kill, > because the bio will be over-split (each small bvec may be split out as one bio) > if it includes lots of small segments. The combination of the virt boundary of PAGE_SIZE - 1

Re: [PATCH V11 14/19] block: handle non-cluster bio out of blk_bio_segment_split

2018-11-22 Thread Ming Lei
On Wed, Nov 21, 2018 at 06:46:21PM +0100, Christoph Hellwig wrote: > Actually... > > I think we can kill this code entirely. If we look at what the > clustering setting is really about, it is to avoid ever merging a > segment that spans a page boundary. And we should be able to do > that with som

Re: [PATCH v3 11/13] nvmet-tcp: add NVMe over TCP target driver

2018-11-22 Thread Christoph Hellwig
> +enum nvmet_tcp_send_state { > + NVMET_TCP_SEND_DATA_PDU = 0, > + NVMET_TCP_SEND_DATA, > + NVMET_TCP_SEND_R2T, > + NVMET_TCP_SEND_DDGST, > + NVMET_TCP_SEND_RESPONSE > +}; > + > +enum nvmet_tcp_recv_state { > + NVMET_TCP_RECV_PDU, > + NVMET_TCP_RECV_DATA, > + NVMET_

Re: [PATCH v3 13/13] nvme-tcp: add NVMe over TCP host driver

2018-11-22 Thread Christoph Hellwig
A few random nitpicks: > +static int nvme_tcp_verify_hdgst(struct nvme_tcp_queue *queue, > + void *pdu, size_t pdu_len) Please use two tabs for indenting prototype continuations. > + len = le32_to_cpu(hdr->plen) - hdr->hlen - > + ((hdr->flags & NVME_TCP_F_HDGST) ? nvme_tcp_hd