On Thu, Nov 15, 2018 at 6:00 AM Andreas Gruenbacher wrote:
>
> could you please pull the following gfs2 fixes for 4.20?
No.
I'm not pulling this useless commit message:
"Merge tag 'v4.20-rc1'"
with absolutely _zero_ explanation for why that merge was done.
Guys, stop doing this. Because I
On Thu, Nov 15, 2018 at 04:52:59PM +0800, Ming Lei wrote:
> diff --git a/block/blk-zoned.c b/block/blk-zoned.c
> index 13ba2011a306..789b09ae402a 100644
> --- a/block/blk-zoned.c
> +++ b/block/blk-zoned.c
> @@ -123,6 +123,7 @@ static int blk_report_zones(struct gendisk *disk,
> sector_t sector,
>
On Thu, Nov 15, 2018 at 12:20 PM Andreas Gruenbacher
wrote:
>
> I guess rebasing the for-next branch onto something more recent to
> avoid the back-merge in the first place will be best, resulting in a
> cleaner history.
Rebases aren't really any better at all.
If you have a real *reason* for a
On Thu, Nov 15, 2018 at 09:49:17AM +0300, Vasily Averin wrote:
> Dear David,
> I've noticed that release_lockspace() lacks idr_destroy(&ls->ls_recover_idr),
> though it is called on rollback in new_lockspace().
>
> It seems to me it is not critical, and should not lead to any leaks,
> however could
On Thu, 15 Nov 2018 at 19:23, Linus Torvalds
wrote:
>
> On Thu, Nov 15, 2018 at 12:20 PM Andreas Gruenbacher
> wrote:
> >
> > I guess rebasing the for-next branch onto something more recent to
> > avoid the back-merge in the first place will be best, resulting in a
> > cleaner history.
>
>
On Thu, Nov 15, 2018 at 04:52:48PM +0800, Ming Lei wrote:
> This patch introduces helpers of 'mp_bvec_iter_*' for multipage
> bvec support.
>
> The introduced helpers treat one bvec as a real multi-page segment,
> which may include more than one page.
>
> The existing helpers of bvec_iter_* are
On Thu, Nov 15, 2018 at 04:52:50PM +0800, Ming Lei wrote:
> First it is more efficient to use bio_for_each_bvec() in both
> blk_bio_segment_split() and __blk_recalc_rq_segments() to compute how
> many multi-page bvecs there are in the bio.
>
> Secondly once bio_for_each_bvec() is used, the bvec
On Thu, Nov 15, 2018 at 04:52:49PM +0800, Ming Lei wrote:
> This helper is used for iterating over multi-page bvec for bio
> split & merge code.
>
> Cc: Dave Chinner
> Cc: Kent Overstreet
> Cc: Mike Snitzer
> Cc: dm-de...@redhat.com
> Cc: Alexander Viro
> Cc: linux-fsde...@vger.kernel.org
>
On Thu, 15 Nov 2018 at 18:11, Linus Torvalds
wrote:
> On Thu, Nov 15, 2018 at 6:00 AM Andreas Gruenbacher
> wrote:
> >
> > could you please pull the following gfs2 fixes for 4.20?
>
> No.
>
> I'm not pulling this useless commit message:
>
> "Merge tag 'v4.20-rc1'"
>
> with absolutely _zero_
On Thu, Nov 15, 2018 at 04:52:54PM +0800, Ming Lei wrote:
> Preparing for supporting multi-page bvec.
>
> Cc: Dave Chinner
> Cc: Kent Overstreet
> Cc: Mike Snitzer
> Cc: dm-de...@redhat.com
> Cc: Alexander Viro
> Cc: linux-fsde...@vger.kernel.org
> Cc: Shaohua Li
> Cc:
On Thu, Nov 15, 2018 at 04:05:10PM -0500, Mike Snitzer wrote:
> On Thu, Nov 15 2018 at 3:20pm -0500,
> Omar Sandoval wrote:
>
> > On Thu, Nov 15, 2018 at 04:52:50PM +0800, Ming Lei wrote:
> > > First it is more efficient to use bio_for_each_bvec() in both
> > > blk_bio_segment_split() and
On Thu, Nov 15, 2018 at 04:52:52PM +0800, Ming Lei wrote:
> BTRFS and guard_bio_eod() need to get the last singlepage segment
> from one multipage bvec, so introduce this helper to make them happy.
>
> Cc: Dave Chinner
> Cc: Kent Overstreet
> Cc: Mike Snitzer
> Cc: dm-de...@redhat.com
> Cc:
On Thu, Nov 15, 2018 at 04:52:51PM +0800, Ming Lei wrote:
> It is more efficient to use bio_for_each_bvec() to map sg; meanwhile
> we have to consider splitting the multipage bvec as done in
> blk_bio_segment_split().
>
> Cc: Dave Chinner
> Cc: Kent Overstreet
> Cc: Mike Snitzer
> Cc:
On Thu, Nov 15, 2018 at 04:52:53PM +0800, Ming Lei wrote:
> Once multi-page bvec is enabled, the last bvec may include more than one
> page, so this patch uses bvec_last_segment() to truncate the bio.
>
> Cc: Dave Chinner
> Cc: Kent Overstreet
> Cc: Mike Snitzer
> Cc: dm-de...@redhat.com
> Cc:
On Thu, Nov 15, 2018 at 04:52:56PM +0800, Ming Lei wrote:
> There are still cases in which we need to use bio_bvecs() to get the
> number of multi-page segments, so introduce it.
>
> Cc: Dave Chinner
> Cc: Kent Overstreet
> Cc: Mike Snitzer
> Cc: dm-de...@redhat.com
> Cc: Alexander Viro
> Cc:
On Thu, Nov 15, 2018 at 04:52:55PM +0800, Ming Lei wrote:
> BTRFS is the only user of this helper, so move this helper into
> BTRFS, and implement it via bio_for_each_segment_all(), since
> bio->bi_vcnt may not equal the number of pages after multipage bvec
> is enabled.
Shouldn't you also get rid
On Thu, Nov 15, 2018 at 04:52:57PM +0800, Ming Lei wrote:
> iov_iter is implemented with a bvec iterator, so it is safe to pass a
> multipage bvec to it, and this way is much more efficient than
> passing one page in each bvec.
>
> Cc: Dave Chinner
> Cc: Kent Overstreet
> Cc: Mike Snitzer
> Cc:
On Thu, Nov 15, 2018 at 04:53:02PM +0800, Ming Lei wrote:
> Now multi-page bvec can cover CONFIG_THP_SWAP, so we don't need to
> increase BIO_MAX_PAGES for it.
You mentioned it in the cover letter, but this needs more explanation
in the commit message. Why did CONFIG_THP_SWAP require > 256?
On Thu, Nov 15, 2018 at 04:52:58PM +0800, Ming Lei wrote:
> bch_bio_alloc_pages() is always called on one new bio, so it is safe
> to access the bvec table directly. Given it is the only case of this
> kind, open code the bvec table access since bio_for_each_segment_all()
> will be changed to
On Thu, Nov 15, 2018 at 04:52:59PM +0800, Ming Lei wrote:
> This patch introduces one extra iterator variable to
> bio_for_each_segment_all(),
> so that bio_for_each_segment_all() can iterate over multi-page bvecs.
>
> Given it is just one mechanical & simple change on all
>
On Thu, Nov 15, 2018 at 04:53:03PM +0800, Ming Lei wrote:
> Now multi-page bvec is supported, some helpers may return page by
> page, meantime some may return segment by segment, this patch
> documents the usage.
>
> Cc: Dave Chinner
> Cc: Kent Overstreet
> Cc: Mike Snitzer
> Cc:
On Thu, Nov 15, 2018 at 04:53:00PM +0800, Ming Lei wrote:
> After multi-page is enabled, one new page may be merged into a segment
> even though it is a newly added page.
>
> This patch deals with this issue by post-checking in case of merge, and
> only a freshly added page needs to be dealt with for
On Thu, Nov 15, 2018 at 04:53:04PM +0800, Ming Lei wrote:
> It is wrong to use bio->bi_vcnt to figure out how many segments
> there are in the bio even though CLONED flag isn't set on this bio,
> because this bio may be split or advanced.
>
> So always use bio_segments() in
On Thu, Nov 15, 2018 at 04:53:05PM +0800, Ming Lei wrote:
> Since bdced438acd83ad83a6c ("block: setup bi_phys_segments after splitting"),
> physical segment number is mainly figured out in blk_queue_split() for
> fast path, and the flag of BIO_SEG_VALID is set there too.
>
> Now only
On Thu, Nov 15, 2018 at 04:53:01PM +0800, Ming Lei wrote:
> This patch pulls the trigger for multi-page bvecs.
>
> Now any request queue which supports queue cluster will see multi-page
> bvecs.
>
> Cc: Dave Chinner
> Cc: Kent Overstreet
> Cc: Mike Snitzer
> Cc: dm-de...@redhat.com
> Cc:
iov_iter is implemented with a bvec iterator, so it is safe to pass a
multipage bvec to it, and this way is much more efficient than
passing one page in each bvec.
Cc: Dave Chinner
Cc: Kent Overstreet
Cc: Mike Snitzer
Cc: dm-de...@redhat.com
Cc: Alexander Viro
Cc: linux-fsde...@vger.kernel.org
BTRFS is the only user of this helper, so move this helper into
BTRFS, and implement it via bio_for_each_segment_all(), since
bio->bi_vcnt may not equal the number of pages after multipage bvec
is enabled.
Cc: Dave Chinner
Cc: Kent Overstreet
Cc: Mike Snitzer
Cc: dm-de...@redhat.com
Cc:
It is more efficient to use bio_for_each_bvec() to map sg; meanwhile
we have to consider splitting the multipage bvec as done in blk_bio_segment_split().
Cc: Dave Chinner
Cc: Kent Overstreet
Cc: Mike Snitzer
Cc: dm-de...@redhat.com
Cc: Alexander Viro
Cc: linux-fsde...@vger.kernel.org
Cc: Shaohua
This helper is used for iterating over multi-page bvec for bio
split & merge code.
Cc: Dave Chinner
Cc: Kent Overstreet
Cc: Mike Snitzer
Cc: dm-de...@redhat.com
Cc: Alexander Viro
Cc: linux-fsde...@vger.kernel.org
Cc: Shaohua Li
Cc: linux-r...@vger.kernel.org
Cc: linux-er...@lists.ozlabs.org
Hi,
This patchset brings multi-page bvec into block layer:
1) what is multi-page bvec?
Multi-page bvecs mean that one 'struct bio_bvec' can hold multiple pages
which are physically contiguous, instead of the single page the Linux
kernel has used for a long time.
2) why is multi-page bvec introduced?
Once multi-page bvec is enabled, the last bvec may include more than one
page, so this patch uses bvec_last_segment() to truncate the bio.
Cc: Dave Chinner
Cc: Kent Overstreet
Cc: Mike Snitzer
Cc: dm-de...@redhat.com
Cc: Alexander Viro
Cc: linux-fsde...@vger.kernel.org
Cc: Shaohua Li
Cc:
QUEUE_FLAG_NO_SG_MERGE has been killed, so kill BLK_MQ_F_SG_MERGE too.
Cc: Dave Chinner
Cc: Kent Overstreet
Cc: Mike Snitzer
Cc: dm-de...@redhat.com
Cc: Alexander Viro
Cc: linux-fsde...@vger.kernel.org
Cc: Shaohua Li
Cc: linux-r...@vger.kernel.org
Cc: linux-er...@lists.ozlabs.org
Cc: David
Now multi-page bvec can cover CONFIG_THP_SWAP, so we don't need to
increase BIO_MAX_PAGES for it.
Cc: Dave Chinner
Cc: Kent Overstreet
Cc: Mike Snitzer
Cc: dm-de...@redhat.com
Cc: Alexander Viro
Cc: linux-fsde...@vger.kernel.org
Cc: Shaohua Li
Cc: linux-r...@vger.kernel.org
Cc:
This patch introduces one extra iterator variable to bio_for_each_segment_all(),
so that bio_for_each_segment_all() can iterate over multi-page bvecs.
Given it is just one mechanical & simple change on all
bio_for_each_segment_all() users, this patch does the tree-wide change in one single
It is wrong to use bio->bi_vcnt to figure out how many segments
there are in the bio even though CLONED flag isn't set on this bio,
because this bio may be split or advanced.
So always use bio_segments() in blk_recount_segments(), and it shouldn't
cause any performance loss now because the
Since bdced438acd83ad83a6c ("block: setup bi_phys_segments after splitting"),
physical segment number is mainly figured out in blk_queue_split() for
fast path, and the flag of BIO_SEG_VALID is set there too.
Now only blk_recount_segments() and blk_recalc_rq_segments() use this
flag.
Basically
Now multi-page bvec is supported, some helpers may return page by
page, meantime some may return segment by segment, this patch
documents the usage.
Cc: Dave Chinner
Cc: Kent Overstreet
Cc: Mike Snitzer
Cc: dm-de...@redhat.com
Cc: Alexander Viro
Cc: linux-fsde...@vger.kernel.org
Cc: Shaohua
After multi-page is enabled, one new page may be merged into a segment
even though it is a newly added page.
This patch deals with this issue by post-checking in case of merge, and
only a freshly added page needs to be dealt with for iomap & xfs.
Cc: Dave Chinner
Cc: Kent Overstreet
Cc: Mike
This patch pulls the trigger for multi-page bvecs.
Now any request queue which supports queue cluster will see multi-page
bvecs.
Cc: Dave Chinner
Cc: Kent Overstreet
Cc: Mike Snitzer
Cc: dm-de...@redhat.com
Cc: Alexander Viro
Cc: linux-fsde...@vger.kernel.org
Cc: Shaohua Li
Cc:
Preparing for supporting multi-page bvec.
Cc: Dave Chinner
Cc: Kent Overstreet
Cc: Mike Snitzer
Cc: dm-de...@redhat.com
Cc: Alexander Viro
Cc: linux-fsde...@vger.kernel.org
Cc: Shaohua Li
Cc: linux-r...@vger.kernel.org
Cc: linux-er...@lists.ozlabs.org
Cc: David Sterba
Cc:
There are still cases in which we need to use bio_bvecs() to get the
number of multi-page segments, so introduce it.
Cc: Dave Chinner
Cc: Kent Overstreet
Cc: Mike Snitzer
Cc: dm-de...@redhat.com
Cc: Alexander Viro
Cc: linux-fsde...@vger.kernel.org
Cc: Shaohua Li
Cc:
BTRFS and guard_bio_eod() need to get the last singlepage segment
from one multipage bvec, so introduce this helper to make them happy.
Cc: Dave Chinner
Cc: Kent Overstreet
Cc: Mike Snitzer
Cc: dm-de...@redhat.com
Cc: Alexander Viro
Cc: linux-fsde...@vger.kernel.org
Cc: Shaohua Li
Cc:
This patch introduces helpers of 'mp_bvec_iter_*' for multipage
bvec support.
The introduced helpers treat one bvec as a real multi-page segment,
which may include more than one page.
The existing helpers of bvec_iter_* are interfaces for supporting the current
bvec iterator which is thought as
If allocation fails on the last elements of the array, we need to free the
already allocated elements.
Fixes: 789924ba635f ("dlm: fix race between remove and lookup")
Cc: sta...@kernel.org # 3.6
Signed-off-by: Vasily Averin
---
fs/dlm/lockspace.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff
Dear David,
I've noticed that release_lockspace() lacks idr_destroy(&ls->ls_recover_idr),
though it is called on rollback in new_lockspace().
It seems to me it is not critical, and should not lead to any leaks,
however could you please re-check it?
Thank you,
Vasily Averin
Fixes: 6d40c4a708e0 ("dlm: improve error and debug messages")
Cc: sta...@kernel.org # 3.5
Signed-off-by: Vasily Averin
---
fs/dlm/lock.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
index 2cb125cc21c9..03d767b94f7b 100644
--- a/fs/dlm/lock.c
+++
Fixes: 3d6aa675fff9 ("dlm: keep lkbs in idr")
Cc: sta...@kernel.org # 3.1
Signed-off-by: Vasily Averin
---
fs/dlm/lock.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
index cc91963683de..2cb125cc21c9 100644
--- a/fs/dlm/lock.c
+++ b/fs/dlm/lock.c
@@ -1209,6
According to the comment in dlm_user_request(), ua should be freed
in dlm_free_lkb() after a successful attach to lkb.
However, ua is attached to lkb not in set_lock_args() but later,
inside request_lock().
Fixes: 597d0cae0f99 ("[DLM] dlm: user locks")
Cc: sta...@kernel.org # 2.6.19
Signed-off-by:
If allocation fails on the last elements of the array, we need to free the
already allocated elements.
v2: just move the existing out_rsbtbl label to the right place
Fixes: 789924ba635f ("dlm: fix race between remove and lookup")
Cc: sta...@kernel.org # 3.6
Signed-off-by: Vasily Averin
---
fs/dlm/lockspace.c | 2 +-
Hi Linus,
could you please pull the following gfs2 fixes for 4.20?
Thank you,
Andreas
The following changes since commit 651022382c7f8da46cb4872a545ee1da6d097d2a:
Linux 4.20-rc1 (2018-11-04 15:37:52 -0800)
are available in the Git repository at: