On Thu, Nov 22, 2018 at 12:03:15PM +0100, Christoph Hellwig wrote:
> > +/* used for chunk_for_each_segment */
> > +static inline void bvec_next_segment(const struct bio_vec *bvec,
> > +struct bvec_iter_all *iter_all)
>
> FYI, chunk_for_each_segment doesn't exist any
On Thu, Nov 22, 2018 at 11:58:49AM +0100, Christoph Hellwig wrote:
> Btw, given that this is the last user of bvec_last_segment after my
> other patches I think we should kill bvec_last_segment and do something
> like this here:
>
>
> diff --git a/fs/buffer.c b/fs/buffer.c
> index fa37ad52e962..a
Ping?
- Ted
On Mon, Oct 29, 2018 at 12:15:57PM -0400, Theodore Ts'o wrote:
> Signed-off-by: Theodore Ts'o
> ---
> check | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/check b/check
> index f6c3537..ebd
On 11/22/18 4:13 AM, Jan Kara wrote:
>
> On Tue 20-11-18 10:19:53, Jens Axboe wrote:
>> +/*
>> + * We can't just wait for polled events to come to us, we have to actively
>> + * find and complete them.
>> + */
>> +static void aio_iopoll_reap_events(struct kioctx *ctx)
>> +{
>> +if (!(ctx->flag
From: Jens Axboe
[ Upstream commit de7b75d82f70c5469675b99ad632983c50b6f7e7 ]
LKP recently reported a hang at bootup in the floppy code:
[ 245.678853] INFO: task mount:580 blocked for more than 120 seconds.
[ 245.679906] Tainted: GT 4.19.0-rc6-00172-ga9f38e1 #1
[ 245.68
From: Hannes Reinecke
[ Upstream commit ca474b73896bf6e0c1eb8787eb217b0f80221610 ]
We need to copy the io priority, too; otherwise the clone will run
with a different priority than the original one.
Fixes: 43b62ce3ff0a ("block: move bio io prio to a new field")
Signed-off-by: Hannes Reinecke
S
Because CUTOFF_WRITEBACK is defined as 40, before the changes for
dynamic cutoff writeback values, writeback_percent was limited to
[0, CUTOFF_WRITEBACK]: any value larger than CUTOFF_WRITEBACK would
be fixed up to 40.
Now the cutoff writeback limit is a dynamic value, bch_cutoff_writeback,
so the range
This patch moves MODULE_AUTHOR and MODULE_LICENSE to the end of super.c,
and adds MODULE_DESCRIPTION("Bcache: a Linux block layer cache").
This is preparation for adding module parameters.
Signed-off-by: Coly Li
---
drivers/md/bcache/super.c | 7 ---
1 file changed, 4 insertions(+), 3 deletions(
Currently the cutoff writeback and cutoff writeback sync thresholds are
defined by CUTOFF_WRITEBACK (40) and CUTOFF_WRITEBACK_SYNC (70) as static
values. Most of the time they work fine, but when people want to do
research on bcache writeback mode performance tuning, there is no chance
to modify
The garbage collection thread starts to work when c->sectors_to_gc is a
negative value; otherwise nothing will happen even if the gc thread is
woken up by wake_up_gc().
force_wake_up_gc() sets c->sectors_to_gc to -1 before calling
wake_up_gc(), so the gc thread has a chance to run if no one else
sets c->s
I received a requirement to provide options that permit people to do
research on writeback performance tuning for their extremely heavy
workloads. These options are required to be disabled by default to
avoid changing current code behavior.
This series adds several disabled-by-default options for wr
The option gc_after_writeback is disabled by default, because garbage
collection will discard SSD data, which drops cached data.
Echoing 1 into /sys/fs/bcache//internal/gc_after_writeback enables
this option, which wakes up the gc thread when writeback has finished
and all cached data is clean.
This
On Thu, Nov 22, 2018 at 11:30:33AM +0100, Christoph Hellwig wrote:
> Btw, this patch instead of the plain revert might make it a little
> clearer what is going on by skipping the confusing helper altogether
> and operating on the raw bvec array:
>
>
> diff --git a/include/linux/bio.h b/include/li
The current kref implementation around pblk global caches triggers a
false positive on refcount_inc_checked() (when called) as the kref is
initialized to 0. Instead of using kref_inc() on a 0 reference, which is
in principle correct, use kref_init() to avoid the check. This is also
more explicit ab
Matias,
Can you pick this up for 4.20? Even though it is not an error per se, it
does trigger an ugly false positive warning when CONFIG_REFCOUNT_FULL is
set.
Thanks,
Javier
Javier González (1):
lightnvm: pblk: avoid ref warning on cache creation
drivers/lightnvm/pblk-init.c | 14 +--
On Tue 20-11-18 10:19:53, Jens Axboe wrote:
> +/*
> + * We can't just wait for polled events to come to us, we have to actively
> + * find and complete them.
> + */
> +static void aio_iopoll_reap_events(struct kioctx *ctx)
> +{
> + if (!(ctx->flags & IOCTX_FLAG_IOPOLL))
> + return
> +/* used for chunk_for_each_segment */
> +static inline void bvec_next_segment(const struct bio_vec *bvec,
> + struct bvec_iter_all *iter_all)
FYI, chunk_for_each_segment doesn't exist anymore, this is
bvec_for_each_segment now. Not sure the comment helps much,
Btw, given that this is the last user of bvec_last_segment after my
other patches I think we should kill bvec_last_segment and do something
like this here:
diff --git a/fs/buffer.c b/fs/buffer.c
index fa37ad52e962..af5e135d2b83 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -2981,6 +2981,14 @@ sta
On Thu, Nov 22, 2018 at 06:46:05PM +0800, Ming Lei wrote:
> Then your patch should work by just replacing virt boundary with segment
> boundary limit. I will do that change in V12 if you don't object.
Please do, thanks a lot.
On Thu, Nov 22, 2018 at 11:41:50AM +0100, Christoph Hellwig wrote:
> On Thu, Nov 22, 2018 at 06:32:09PM +0800, Ming Lei wrote:
> > On Thu, Nov 22, 2018 at 11:04:28AM +0100, Christoph Hellwig wrote:
> > > On Thu, Nov 22, 2018 at 05:33:00PM +0800, Ming Lei wrote:
> > > > However, using virt boundary
On Thu, Nov 22, 2018 at 06:32:09PM +0800, Ming Lei wrote:
> On Thu, Nov 22, 2018 at 11:04:28AM +0100, Christoph Hellwig wrote:
> > On Thu, Nov 22, 2018 at 05:33:00PM +0800, Ming Lei wrote:
> > > However, using virt boundary limit on non-cluster seems over-kill,
> > > because the bio will be over-sp
On Thu, Nov 22, 2018 at 06:26:17PM +0800, Ming Lei wrote:
> Suppose one bio includes (pg0, 0, 512) and (pg1, 512, 512):
>
> The split is introduced by the following code in blk_bio_segment_split():
>
> if (bvprvp && bvec_gap_to_prev(q, bvprvp, bv.bv_offset))
> goto spl
On Thu, Nov 22, 2018 at 11:04:28AM +0100, Christoph Hellwig wrote:
> On Thu, Nov 22, 2018 at 05:33:00PM +0800, Ming Lei wrote:
> > However, using virt boundary limit on non-cluster seems over-kill,
> > because the bio will be over-split (each small bvec may be split as one bio)
> > if it includes lo
Btw, this patch instead of the plain revert might make it a little
clearer what is going on by skipping the confusing helper altogether
and operating on the raw bvec array:
diff --git a/include/linux/bio.h b/include/linux/bio.h
index e5b975fa0558..926550ce2d21 100644
--- a/include/linux/bio.h
+++
On Thu, Nov 22, 2018 at 11:04:28AM +0100, Christoph Hellwig wrote:
> On Thu, Nov 22, 2018 at 05:33:00PM +0800, Ming Lei wrote:
> > However, using virt boundary limit on non-cluster seems over-kill,
> > because the bio will be over-split (each small bvec may be split as one bio)
> > if it includes lo
On Thu, Nov 22, 2018 at 06:15:28PM +0800, Ming Lei wrote:
> > while (bytes) {
> > - unsigned segment_len = segment_iter_len(bv, *iter);
> > -
> > - if (max_seg_len < BVEC_MAX_LEN)
> > - segment_len = min_t(unsigned, segment_len,
> > -
On Wed, Nov 21, 2018 at 06:12:17PM +0100, Christoph Hellwig wrote:
> On Wed, Nov 21, 2018 at 05:10:25PM +0100, Christoph Hellwig wrote:
> > No - I think we can always use the code without any segment in
> > bvec_iter_advance. Because bvec_iter_advance only operates on the
> > iteractor, the genera
On Thu, Nov 22, 2018 at 05:33:00PM +0800, Ming Lei wrote:
> However, using virt boundary limit on non-cluster seems over-kill,
> because the bio will be over-split (each small bvec may be split as one bio)
> if it includes lots of small segments.
The combination of the virt boundary of PAGE_SIZE - 1
On Wed, Nov 21, 2018 at 06:46:21PM +0100, Christoph Hellwig wrote:
> Actually..
>
> I think we can kill this code entirely. If we look at what the
> clustering setting is really about it is to avoid ever merging a
> segement that spans a page boundary. And we should be able to do
> that with som
> +enum nvmet_tcp_send_state {
> + NVMET_TCP_SEND_DATA_PDU = 0,
> + NVMET_TCP_SEND_DATA,
> + NVMET_TCP_SEND_R2T,
> + NVMET_TCP_SEND_DDGST,
> + NVMET_TCP_SEND_RESPONSE
> +};
> +
> +enum nvmet_tcp_recv_state {
> + NVMET_TCP_RECV_PDU,
> + NVMET_TCP_RECV_DATA,
> + NVMET_
A few random nitpicks:
> +static int nvme_tcp_verify_hdgst(struct nvme_tcp_queue *queue,
> + void *pdu, size_t pdu_len)
Please use two tabs for indenting prototype continuations
> + len = le32_to_cpu(hdr->plen) - hdr->hlen -
> + ((hdr->flags & NVME_TCP_F_HDGST) ? nvme_tcp_hd