The only user in your final tree seems to be the loop driver, and
even that one only uses the helper for read/write bios.
I think something like this would be much simpler in the end:
The recently submitted nvme-tcp host driver should also be a user
of this. Does it make sense to keep it as
Ming Lei writes:
> On Thu, Nov 15, 2018 at 05:59:36PM -0800, Omar Sandoval wrote:
>> On Thu, Nov 15, 2018 at 04:53:02PM +0800, Ming Lei wrote:
>> > Now multi-page bvec can cover CONFIG_THP_SWAP, so we don't need to
>> > increase BIO_MAX_PAGES for it.
>>
You mentioned it in the cover letter
Hi,
Here is a new and improved version of the patch I posted on
16 November. Since the field is no longer needed, neither are
the function parameters used to allocate a bd.
---
Field bd_ops was set but never used, so I removed it, and all
code supporting it.
Signed-off-by: Bob Peterson
---
fs/g
On 20/11/18 14:37, Bob Peterson wrote:
Hi,
Here is a new and improved version of the patch I posted on
16 November. Since the field is no longer needed, neither are
the function parameters used to allocate a bd.
---
Field bd_ops was set but never used, so I removed it, and all
code supporting
Hi,
On 19/11/18 21:06, Bob Peterson wrote:
Hi Steve,
- Original Message -
On 19/11/18 13:29, Bob Peterson wrote:
This is another baby step toward a better glock state machine.
Before this patch, do_xmote was called with a gh parameter, but
only for promotes, not demotes. This patch
Hi,
On 19/11/18 21:26, Bob Peterson wrote:
Hi,
- Original Message -
On 19/11/18 13:29, Bob Peterson wrote:
This is another baby step toward a better glock state machine.
This patch eliminates a goto in function finish_xmote so we can
begin unraveling the cryptic logic with later pat
On Mon, Nov 19, 2018 at 04:49:27PM -0800, Sagi Grimberg wrote:
>
>> The only user in your final tree seems to be the loop driver, and
>> even that one only uses the helper for read/write bios.
>>
>> I think something like this would be much simpler in the end:
>
> The recently submitted nvme-tcp ho
Hi,
On 08/11/18 20:25, Bob Peterson wrote:
Hi,
This is a first draft of a two-patch set to fix some of the nasty
journal recovery problems I've found lately.
The problems have to do with file system corruption caused when recovery
replays a journal after the resource group blocks have been un
By accident, when searching for something systemd-related, dlm.service
caught my eye, and surprisingly, it was rather in a HW-support-in-Linux
SW-enablement context. Briefly looking into the Ubuntu driver that
allegedly contained that file (or the recipe to create it, actually), I've
realized the res
On Tue, Nov 20, 2018 at 12:11:35PM -0800, Sagi Grimberg wrote:
>
> > > > The only user in your final tree seems to be the loop driver, and
> > > > even that one only uses the helper for read/write bios.
> > > >
> > > > I think something like this would be much simpler in the end:
> > >
> > > The
Not sure I understand the 'blocking' problem in this case.
We can build a bvec table from this req, and send them all
in send(),
I would like to avoid growing bvec tables and keep everything
preallocated. Plus, a bvec_iter operates on a bvec which means
we'll need a table there as well... No
This patch introduces 'segment_iter_*' helpers for multi-page
bvec support.
The introduced helpers treat one bvec as a real multi-page segment,
which may include more than one page.
The existing bvec_iter_* helpers are interfaces for supporting the current
bvec iterator, which is treated as single
Hi,
This patchset brings multi-page bvec into block layer:
1) what is multi-page bvec?
Multi-page bvec means that one 'struct bio_vec' can hold multiple pages
which are physically contiguous, instead of the single page used in the
Linux kernel for a long time.
2) why is multi-page bvec introduced?
K
It is wrong to use bio->bi_vcnt to figure out how many segments
there are in the bio even though the CLONED flag isn't set on this bio,
because this bio may have been split or advanced.
So always use bio_segments() in blk_recount_segments(), and it shouldn't
cause any performance loss now because the phys
This helper is used for iterating over multi-page bvecs in the bio
split & merge code.
Reviewed-by: Omar Sandoval
Signed-off-by: Ming Lei
---
include/linux/bio.h | 25 ++---
include/linux/bvec.h | 36 +---
2 files changed, 51 insertions(+), 10 de
First, it is more efficient to use bio_for_each_bvec() in both
blk_bio_segment_split() and __blk_recalc_rq_segments() to compute how
many multi-page bvecs there are in the bio.
Secondly, once bio_for_each_bvec() is used, the bvec may need to be
split because its length can be much longer than max
It is more efficient to use bio_for_each_bvec() to map sg; meanwhile
we have to consider splitting the multi-page bvec as done in blk_bio_segment_split().
Reviewed-by: Omar Sandoval
Signed-off-by: Ming Lei
---
block/blk-merge.c | 68 +++
1 file chan
BTRFS and guard_bio_eod() need to get the last single-page segment
from one multi-page bvec, so introduce this helper to make them happy.
Reviewed-by: Omar Sandoval
Signed-off-by: Ming Lei
---
include/linux/bvec.h | 22 ++
1 file changed, 22 insertions(+)
diff --git a/include
Once multi-page bvec is enabled, the last bvec may include more than one
page; this patch uses bvec_last_segment() to truncate the bio.
Reviewed-by: Omar Sandoval
Reviewed-by: Christoph Hellwig
Signed-off-by: Ming Lei
---
fs/buffer.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
di
Preparing for supporting multi-page bvec.
Reviewed-by: Omar Sandoval
Signed-off-by: Ming Lei
---
fs/btrfs/extent_io.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index d228f706ff3e..5d5965297e7e 100644
--- a/fs/btrfs/exten
BTRFS is the only user of this helper, so move this helper into
BTRFS, and implement it via bio_for_each_segment_all(), since
bio->bi_vcnt may not equal the number of pages after multi-page bvec
is enabled.
Signed-off-by: Ming Lei
---
fs/btrfs/extent_io.c | 14 +-
include/linux/bio.h
bch_bio_alloc_pages() is always called on a new bio, so it is safe
to access the bvec table directly. Given it is the only case of this
kind, open-code the bvec table access, since bio_for_each_segment_all()
will be changed to support iterating over multi-page bvecs.
Acked-by: Coly Li
Signed-o
iov_iter is implemented on bvec iterator helpers, so it is safe to pass
a multi-page bvec to it, and this way is much more efficient than passing one
page in each bvec.
Signed-off-by: Ming Lei
---
drivers/block/loop.c | 20 ++--
include/linux/blkdev.h | 4
2 files changed
We will enable multi-page bvec soon, but non-cluster queues can't
handle multi-page bvecs at all. This patch borrows bounce's
idea to clone a new single-page bio for non-cluster queues, and moves
its handling out of blk_bio_segment_split().
Signed-off-by: Ming Lei
---
block/Makefile | 3 ++-
We will reuse bounce_clone_bio() for cloning bios in the case of
!blk_queue_cluster(q), so move this helper into bio.c and
rename it as bio_clone_bioset().
No functional change.
Signed-off-by: Ming Lei
---
block/bio.c| 69 +
block/blk.h|
This patch pulls the trigger for multi-page bvecs.
Signed-off-by: Ming Lei
---
block/bio.c | 32 +++-
fs/iomap.c| 2 +-
fs/xfs/xfs_aops.c | 2 +-
3 files changed, 29 insertions(+), 7 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index 0f1635b9ec
This patch introduces one extra iterator variable to bio_for_each_segment_all(),
then we can allow bio_for_each_segment_all() to iterate over multi-page bvecs.
Given it is just one mechanical & simple change on all
bio_for_each_segment_all()
users, this patch does the tree-wide change in one single p
Now multi-page bvec can cover CONFIG_THP_SWAP, so we don't need to
increase BIO_MAX_PAGES for it.
CONFIG_THP_SWAP needs to split one THP into normal pages and add
them all to one bio. With multi-page bvec, it just takes one bvec to
hold them all.
Reviewed-by: Christoph Hellwig
Signed-off-by: Min
Now that multi-page bvec is supported, some helpers may return page by
page, while others may return segment by segment; this patch
documents the usage.
Signed-off-by: Ming Lei
---
Documentation/block/biovecs.txt | 24
1 file changed, 24 insertions(+)
diff --git a/Documenta
Since bdced438acd83ad83a6c ("block: setup bi_phys_segments after splitting"),
the physical segment number is mainly figured out in blk_queue_split() for
the fast path, and the BIO_SEG_VALID flag is set there too.
Now only blk_recount_segments() and blk_recalc_rq_segments() use this
flag.
Basically blk
QUEUE_FLAG_NO_SG_MERGE has been killed, so kill BLK_MQ_F_SG_MERGE too.
Reviewed-by: Christoph Hellwig
Reviewed-by: Omar Sandoval
Signed-off-by: Ming Lei
---
block/blk-mq-debugfs.c | 1 -
drivers/block/loop.c | 2 +-
drivers/block/nbd.c | 2 +-
drivers/block/rbd.c
On Tue, Nov 20, 2018 at 07:20:45PM -0800, Sagi Grimberg wrote:
>
> > Not sure I understand the 'blocking' problem in this case.
> >
> > We can build a bvec table from this req, and send them all
> > in send(),
>
> I would like to avoid growing bvec tables and keep everything
> preallocated. Plus
I would like to avoid growing bvec tables and keep everything
preallocated. Plus, a bvec_iter operates on a bvec which means
we'll need a table there as well... Not liking it so far...
In case of bios in one request, we can't know how many bvecs there
are except by calling rq_bvecs(), so it
Yeah, that is the most common example, given merge is enabled
in most cases. If the driver or device doesn't care about merging,
you can disable it and always get single-bio requests; then the
bio's bvec table can be reused for send().
Does bvec_iter span bvecs with your patches? I didn't see that
On Wed, Nov 21, 2018 at 11:43:56AM +0900, Eiichi Tsukata wrote:
> Some file systems (including ext4, xfs, ramfs ...) have the following
> problem as I've described in the commit message of the 1/4 patch.
>
> The commit ef3d0fd27e90 ("vfs: do (nearly) lockless generic_file_llseek")
> removed al
On Tue, Nov 20, 2018 at 08:42:04PM -0800, Sagi Grimberg wrote:
>
> > > Yeah, that is the most common example, given merge is enabled
> > > in most of cases. If the driver or device doesn't care merge,
> > > you can disable it and always get single bio request, then the
> > > bio's bvec table can b
Wait, I see that the bvec is still a single array per bio. When you said
a table I thought you meant a 2-dimensional array...
I mean a new 1-d table A has to be created for multiple bios in one rq,
and build it in the following way
rq_for_each_bvec(tmp, rq, rq_iter)