n linux-next as well? Thanks!
For the block bits:
Acked-by: Jens Axboe
--
Jens Axboe
--
dm-devel mailing list
dm-devel@redhat.com
https://listman.redhat.com/mailman/listinfo/dm-devel
On 9/15/23 1:13 PM, Mikulas Patocka wrote:
>
>
> On Fri, 15 Sep 2023, Mike Snitzer wrote:
>
>> On Fri, Sep 15 2023 at 12:14P -0400,
>> Jens Axboe wrote:
>>
>>> On 9/15/23 10:04 AM, Jens Axboe wrote:
>>>> Hi,
>>>>
>>>> T
kernel.org
Fixes: 563a225c9fd2 ("dm: introduce dm_{get,put}_live_table_bio called from dm_submit_bio")
Signed-off-by: Jens Axboe
---
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index f0f118ab20fa..64a1f306c96c 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -71
On 9/15/23 12:54 PM, Mike Snitzer wrote:
> On Fri, Sep 15 2023 at 12:14P -0400,
> Jens Axboe wrote:
>
>> On 9/15/23 10:04 AM, Jens Axboe wrote:
>>> Hi,
>>>
>>> Threw some db traffic into my testing mix, and that ended in tears
>>> very quic
On 9/15/23 10:04 AM, Jens Axboe wrote:
> Hi,
>
> Threw some db traffic into my testing mix, and that ended in tears
> very quickly:
>
> CPU: 7 PID: 49609 Comm: ringbuf-read.t Tainted: GW
> 6.6.0-rc1-g39956d2dcd81 #129
> Hardware name: QEMU Standard
if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)) ||
@@ -1851,7 +1832,7 @@ static void dm_submit_bio(struct bio *bio)
dm_split_and_process_bio(md, map, bio);
out:
- dm_put_live_table_bio(md, srcu_idx, bio_opf);
+ dm_put_live_table(md, srcu_idx);
}
static bool dm_poll_dm_i
v1?
--
Jens Axboe
com/
>>
>> I noticed that your patch series has already supported discard for brd. But
>> this patch series has not been applied to mainline at present, may I ask if
>> you still plan to continue working on it?
>>
>> --
>> Thanks,
>> Nan
>
> Hi
ofile::lock
commit: de9f927faf8dfb158763898e09a3e371f2ebd30d
Best regards,
--
Jens Axboe
49a8ce78ef014a71b05157a43fba8dc764e3
[30/30] fs: remove the now unused FMODE_* flags
commit: 0733ad8002916b9dbbbcfe6e92ad44d2657de1c1
Best regards,
--
Jens Axboe
_lookup_bdev as __init
commit: 2577f53f42947d8ca01666e3444bb7307319ea38
Best regards,
--
Jens Axboe
o_add_folio_nofail
commit: 42205551d1d43b1b42942fb7ef023cf954136cea
[19/20] fs: iomap: use bio_add_folio_nofail where possible
commit: f31c58ab3ddaf64503d7988197602d7443d5be37
[20/20] block: mark bio_add_folio as __must_check
commit: 9320744e4dbe10df6059b2b6531946c200a0ba3b
Best regards,
--
Jens Axboe
On 5/26/23 12:37 AM, Johannes Thumshirn wrote:
> On 24.05.23 17:02, Jens Axboe wrote:
>> On 5/2/23 4:19 AM, Johannes Thumshirn wrote:
>>> We have two functions for adding a page to a bio, __bio_add_page() which is
>>> used to add a single page to a freshly created b
as __must_check so we don't have to go again
> and audit all callers.
Looks fine to me, though it would be nice if the fs and dm people could
give this a quick look. Should not take long, any empty bio addition
should, by definition, be able to use a non-checked page addition for
the first page.
ong
>>
>> Changes to v1:
>> - Removed pointless comment pointed out by Willy
>> - Changed commit messages pointed out by Damien
>> - Collected Damien's Reviews and Acks
>
> Jens any comments on this?
I'll take a look post -rc1.
--
Jens Axboe
On 4/21/23 4:30 PM, Luis Chamberlain wrote:
> On Fri, Apr 21, 2023 at 04:24:57PM -0600, Jens Axboe wrote:
>> On 4/21/23 4:02 PM, Luis Chamberlain wrote:
>>> On Fri, Apr 21, 2023 at 09:14:00PM +0100, Matthew Wilcox wrote:
>>>> On Fri, Apr 21, 2023 at 12:58:05PM
We could just do:
>
>
> - return bioset_init(_ioend_bioset, 4 * (PAGE_SIZE / SECTOR_SIZE),
> + return bioset_init(_ioend_bioset, 4 * PAGE_SECTORS,
>
> The shift just seemed optimal if we're just going to change it.
It's going to generate the same code, but the multiplicati
On 3/24/23 4:53 PM, Mike Snitzer wrote:
> On Fri, Mar 24 2023 at 3:34P -0400,
> Jens Axboe wrote:
>
>> Just some random drive-by comments.
>>
>>> diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
>>> index 1de1bdcda1ce..a58f8ac3ba75 100644
ock)
> +{
> + struct rb_node *n = root->rb_node;
> + struct dm_buffer *b;
> + struct dm_buffer *best = NULL;
> +
> + while (n) {
> + b = container_of(n, struct dm_buffer, node);
> +
> + if (b->block == block)
> +
not sure why they are accounted like that
> but I think this behaviour is obviously wrong because user will get
> wrong disk stats.
>
> [...]
Applied, thanks!
[1/1] block: count 'ios' and 'sectors' when io is done for bio-based device
commit: 5f27571382ca42daa3e3d40d1b252bf18c2b61d2
Best
ned_request
commit: 49d24398327e32265eccdeec4baeb5a6a609c0bd
Best regards,
--
Jens Axboe
On 12/7/22 5:35 PM, Keith Busch wrote:
> On Wed, Dec 07, 2022 at 11:17:12PM +, Chaitanya Kulkarni wrote:
>> On 12/7/22 15:08, Jens Axboe wrote:
>>>
>>> My default peak testing runs at 122M IOPS. That's also the peak IOPS of
>>> the devices combined, and
On 12/7/22 3:32 PM, Gulam Mohamed wrote:
> As per the review comment from Jens Axboe, I am re-sending this patch
> against "for-6.2/block".
>
>
> Use ktime to change the granularity of IO accounting in block layer from
> milli-seconds to nano-seconds to g
my
test bench with actual IO and devices.
> BTW, I thought it's fine because it's already used for tracking io
> latency.
Reading a nsec timestamp is a LOT more expensive than reading jiffies,
which is essentially free. If you look at the amount of work that's
gone into minimizing ktime_get() for the fast path in the IO stack,
then that's a testament to that.
So that's a very bad assumption, and definitely wrong.
--
Jens Axboe
the test and their
> output:
As mentioned, this will most likely have a substantial performance
impact. I'd test it, but your patch is nowhere near applying to the
current block tree. Please resend it against for-6.2/block so it can
get tested.
--
Jens Axboe
On Tue, 06 Dec 2022 15:40:57 +0100, Christoph Hellwig wrote:
> This macro is obsolete, so replace the last few uses with open coded
> bi_opf assignments.
>
>
Applied, thanks!
[1/1] block: remove bio_set_op_attrs
commit: c34b7ac65087554627f4840f4ecd6f2107a68fd1
Best regar
On 12/5/22 1:29 PM, Michael S. Tsirkin wrote:
> On Mon, Dec 05, 2022 at 11:53:51AM -0700, Jens Axboe wrote:
>> On 12/5/22 11:36 AM, Alvaro Karsz wrote:
>>> Hi,
>>>
>>>> Is this based on some spec? Because it looks pretty odd to me. There
>>>> can
for generic devices these days, if any.
>
> Yes, this is based on the virtio spec
> https://docs.oasis-open.org/virtio/virtio/v1.2/csd01/virtio-v1.2-csd01.html
> section 5.2.6
And where did this come from?
--
Jens Axboe
rmation by sending a VIRTIO_BLK_T_GET_LIFETIME command to the device.
s/VBLK_LIFETIME/VBLK_GET_LIFETIME
for the above.
--
Jens Axboe
t looks pretty odd to me. There
can be a pretty wide range of two/three/etc level cells with wildly
different ranges of durability. And there's really not a lot of slc
for generic devices these days, if any.
--
Jens Axboe
o blk-crypto-internal.h
commit: 3569788c08235c6f3e9e6ca724b2df44787ff487
Best regards,
--
Jens Axboe
efault_limits() private
commit: b3228254bb6e91e57f920227f72a1a7d81925d81
[4/5] dm-integrity: set dma_alignment limit in io_hints
commit: 29aa778bb66795e6a78b1c99beadc83887827868
[5/5] dm-log-writes: set dma_alignment limit in io_hints
commit: 50a893359cd2643ee1afc96eedc9e7084cab49fa
Best regards,
--
Jens
r bd_holder_dir
commit: 62f535e1f061b4c2cc76061b6b59af9f9335ee34
[09/10] block: store the holder kobject in bd_holder_disk
commit: 3b3449c1e6c3fe19f62607ff4f353f8bb82d5c4e
[10/10] block: don't allow a disk link holder to itself
commit: 077a4033541fc96fb0a955985aab7d1f353da831
Best regards
doesn't clear rq->bio and rq->__data_len for request
> with ->end_io in blk_mq_end_request_batch(), and this way is actually
> dangerous, but so far it is only for nvme passthrough request.
>
> [...]
Applied, thanks!
[1/1] blk-mq: don't add non-pt request with ->end_io to batch
On 9/30/22 1:38 PM, Bart Van Assche wrote:
> On 9/30/22 08:13, Jens Axboe wrote:
>> On 9/29/22 12:31 AM, Pankaj Raghav wrote:
>>>> Hi Jens,
>>>> Please consider this patch series for the 6.1 release.
>>>>
>>>
>>> Hi Jens, Chr
te for 6.1 and I'd really like to have both Christoph
and Martin sign off on these changes.
--
Jens Axboe
"bio->bi_iter.bi_size = len" treats it as if it were in bytes.
> The statements "sector += len << SECTOR_SHIFT" and "nr_sects -= len <<
> SECTOR_SHIFT" are thinkos.
>
> [...]
Applied, thanks!
[1/1] blk-lib: fix blkdev_issue_secure_erase
com
remove blk_queue_zone_sectors
commit: de71973c2951cb2ce4b46560f021f03b15906408
[16/16] block: move zone related fields to struct gendisk
commit: d86e716aa40643e3eb8c69fab3a198146bf76dd6
Best regards,
--
Jens Axboe
On Jun 29, 2022, at 1:26 PM, Kent Overstreet wrote:
>
> On Wed, Jun 29, 2022 at 01:00:52PM -0600, Jens Axboe wrote:
>>> On 6/29/22 12:40 PM, Kent Overstreet wrote:
>>> On Wed, Jun 29, 2022 at 11:16:10AM -0600, Jens Axboe wrote:
>>>> Not sure what Christoph
On 6/29/22 12:40 PM, Kent Overstreet wrote:
> On Wed, Jun 29, 2022 at 11:16:10AM -0600, Jens Axboe wrote:
>> Not sure what Christoph change you are referring to, but all the ones
>> that I did to improve the init side were all backed by numbers I ran at
>> that time (and m
On 6/28/22 12:32 PM, Kent Overstreet wrote:
> On Tue, Jun 28, 2022 at 12:13:06PM -0600, Jens Axboe wrote:
>> It's much less about using whatever amount of memory for inflight IO,
>> and much more about not bloating fast path structures (of which the
>> bio is certainly one).
r integrity &
>>> fscrypt.
>>
>> Not mention bio_iter, bvec_iter has been 32 bytes, which is too big to
>> hold in per-io data structure. With this patch, 8bytes is enough
>> to rewind one bio if the end sector is fixed.
>
> Hold on though, does tha
g in get_max_io_size
commit: 08fdba80df1fd78a22b00e96ffd062a5bbaf8d8e
[5/6] block: fold blk_max_size_offset into get_max_io_size
commit: d8f1d38c87b87ea3a0a0c58b6386333731e29470
[6/6] block: move blk_queue_get_max_sectors to blk.h
commit: d8fca63495fb21e9b2dfcf722346aa844459139a
Best regards,
--
locations. And no, this is not
> intended as a "cheap shot" against Jens who did that either..
>
> This is what I think should fix this, and will allow us to remove
> bioset_init_from_src which was a bad idea from the start:
Based on a quick look, seems good to me.
--
Jens Axboe
checking
and initialization.
--
Jens Axboe
On 5/31/22 1:49 PM, Mike Snitzer wrote:
> On Tue, May 31 2022 at 3:00P -0400,
> Jens Axboe wrote:
>
>> On 5/31/22 12:58 PM, Mike Snitzer wrote:
>>> On Sun, May 29 2022 at 8:46P -0400,
>>> Jens Axboe wrote:
>>>
>>>> On 5/28/22 6:17 PM
On 5/31/22 12:58 PM, Mike Snitzer wrote:
> On Sun, May 29 2022 at 8:46P -0400,
> Jens Axboe wrote:
>
>> On 5/28/22 6:17 PM, Matthew Wilcox wrote:
>>> Not quite sure whose bug this is. Current Linus head running xfstests
>>> against ext4 (probably not ext4's
tiple times? Which it probably
should not.
The reset of bioset_exit() is resilient against this, so might be best
to include bio_alloc_cache_destroy() in that.
diff --git a/block/bio.c b/block/bio.c
index a3893d80dccc..be3937b84e68 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -722,6 +722,7 @@ static
[10/11] rnbd-srv: use bdev_discard_alignment
commit: 18292faa89d2bff3bdd33ab9c065f45fb6710e47
[11/11] xen-blkback: use bdev_discard_alignment
commit: c899b23533866910c90ef4386b501af50270d320
Best regards,
--
Jens Axboe
arity helper
commit: 7b47ef52d0a2025fd1408a8a0990933b8e1e510f
[26/27] block: decouple REQ_OP_SECURE_ERASE from REQ_OP_DISCARD
commit: 44abff2c0b970ae3d310b97617525dc01f248d7c
[27/27] direct-io: remove random prefetches
commit: c22198e78d523c8fa079bbb70b2523bb6aa51849
Best regards,
--
Jens Axboe
ting interface from gendisk to bdev
commit: 5f0614a55ecebdf55f1a17db0b5f6b787ed009f1
Best regards,
--
Jens Axboe
posting. We will run this on zoned stuff to check.
>
> OK, I appreciate it..
>
>> Note that patches 13 to 20 are empty...
>
> Not sure what's going on there... basically any patch that wasn't from
> me (so 1, 13-19) isn't showing up in patchwork or the dm-devel
> archive.
T
b91e1d
[5/5] pktcdvd: stop using bio_reset
commit: 852ad96cb03621f7995764b4b31cbff9801d8bcd
Best regards,
--
Jens Axboe
my test systems, which use squashfs initrd:
The series has been reverted on the block side, so next linux-next should
be fine again. We'll try again for 5.19.
--
Jens Axboe
On 3/31/22 10:40 AM, Christoph Hellwig wrote:
> This should fix it:
Let's drop this one for 5.18, it's also causing a few conflicts and
would probably be more suited for 5.19 at this point.
--
Jens Axboe
c cache by block drivers
commit: e866e4dbad251b4dd1e134c295afd862333864bc
Best regards,
--
Jens Axboe
bc477f
[5/5] pktcdvd: stop using bio_reset
commit: 1292fb59f283e76f55843d94f066c2f0b91dfb7e
Best regards,
--
Jens Axboe
On 3/30/22 8:29 AM, Christoph Hellwig wrote:
> I just noticed this didn't make it into the 5.18 queue. Which is a
> bit sad as it leaves us with a rather inconsistent bio API in 5.18.
Let me take a look, we might still be able to make it...
--
Jens Axboe
es you'd like made.
Ran the usual peak testing, and it's good for about a 20% improvement
for me. 5.6M -> 6.6M IOPS on a single core, dm-linear.
--
Jens Axboe
for 4k with
polling.
--
Jens Axboe
On 3/8/22 6:13 PM, Ming Lei wrote:
> On Tue, Mar 08, 2022 at 06:02:50PM -0700, Jens Axboe wrote:
>> On 3/7/22 11:53 AM, Mike Snitzer wrote:
>>> From: Ming Lei
>>>
>>> Support bio(REQ_POLLED) polling in the following approach:
>>>
>>> 1) o
on a gen2 optane, it's 10x the IOPS
of what it was tested on and should help better highlight where it
makes a difference.
If either of you would like that, then send me a foolproof recipe for
what should be setup so I have a poll capable dm device.
--
Jens Axboe
io typo in comment block above
> __submit_bio_noacct.
Assuming you want to take this through the dm tree:
Reviewed-by: Jens Axboe
--
Jens Axboe
6c197430e
[08/10] raid5-ppl: stop using bio_devname
commit: c7dec4623c9cde20dad8de319d177ed6aa382aaa
[09/10] ext4: stop using bio_devname
commit: 734294e47a2ec48fd25dcf2d96cdf2c6c6740c00
[10/10] block: remove bio_devname
commit: 97939610b893de068c82c347d06319cd231a4602
Best re
On 3/6/22 7:20 PM, Ming Lei wrote:
> On Sun, Mar 06, 2022 at 06:48:15PM -0700, Jens Axboe wrote:
>> On 3/6/22 2:29 AM, Christoph Hellwig wrote:
>>>> +/*
>>>> + * Reuse ->bi_end_io as hlist head for storing all dm_io instances
>>>> + * associ
be able
> to find space in the bio by creatively shifting fields around to just
> add the hlist there directly, which would remove the need for this
> override and more importantly the quite cumbersome saving and restoring
> of the end_io handler.
If it's possible, then that would be pr
(disk->queue) && disk->fops->poll_bio)
return -EINVAL;
or something like that, with a comment saying why that doesn't make any
sense.
--
Jens Axboe
can wait
> another merge window to make your life easier.
Let's just use the SCSI tree - I didn't check if it throws any conflicts
right now, so probably something to check upfront...
If things pan out, you can add my Acked-by to the series.
--
Jens Axboe
k_insert_cloned_request
(no commit info)
[3/5] blk-mq: remove the request_queue argument to blk_insert_cloned_request
(no commit info)
[4/5] dm: remove useless code from dm_dispatch_clone_request
(no commit info)
[5/5] dm: remove dm_dispatch_clone_request
(no commit info
4c4a695
[13/13] block: pass a block_device to bio_clone_fast
commit: abfc426d1b2fb2176df59851a64223b58ddae7e7
Best regards,
--
Jens Axboe
mber
>> of sectors is a widely followed convention:
>>
>> $ git grep -w sector_t | wc -l
>> 2575
>>
>> I would appreciate it if that convention would be used consistently, even if
>> that means modifying existing code.
>>
>> Thanks,
>>
>>
device and opf to bio_reset
commit: a7c50c940477bae89fb2b4f51bd969a2d95d7512
Best regards,
--
Jens Axboe
art_io_acct_time() to control start_time
commit: 5a6cd1d29f2104bd0306a0f839c8b328395b784f
[2/3] dm: revert partial fix for redundant bio-based IO accounting
commit: b6e31a39c63e0214937c8c586faa10122913e935
[3/3] dm: properly fix redundant bio-based IO accounting
commit: 3c4ae3478082388ae9680a932
g up code.
Looks pretty straight forward from the block core point of view. Didn't
look too closely at the fs/driver changes yet.
--
Jens Axboe
On 11/2/21 8:47 AM, James Bottomley wrote:
> On Tue, 2021-11-02 at 08:41 -0600, Jens Axboe wrote:
>> On 11/2/21 8:36 AM, Jens Axboe wrote:
>>> On 11/2/21 8:33 AM, James Bottomley wrote:
>>>> On Tue, 2021-11-02 at 06:59 -0600, Jens Axboe wrote:
>>>>
On 11/2/21 8:47 AM, James Bottomley wrote:
> On Tue, 2021-11-02 at 08:41 -0600, Jens Axboe wrote:
>> On 11/2/21 8:36 AM, Jens Axboe wrote:
>>> On 11/2/21 8:33 AM, James Bottomley wrote:
>>>> On Tue, 2021-11-02 at 06:59 -0600, Jens Axboe wrote:
>>>>
On 11/2/21 8:36 AM, Jens Axboe wrote:
> On 11/2/21 8:33 AM, James Bottomley wrote:
>> On Tue, 2021-11-02 at 06:59 -0600, Jens Axboe wrote:
>>> On 11/1/21 7:43 PM, James Bottomley wrote:
>>>> On Thu, 2021-10-21 at 22:59 +0800, Ming Lei wrote:
>>>>>
On 11/2/21 8:33 AM, James Bottomley wrote:
> On Tue, 2021-11-02 at 06:59 -0600, Jens Axboe wrote:
>> On 11/1/21 7:43 PM, James Bottomley wrote:
>>> On Thu, 2021-10-21 at 22:59 +0800, Ming Lei wrote:
>>>> For fixing queue quiesce race between driver and block
>
request_queue);
>
> The reason to do it with atomics rather than spinlocks is
>
>1. no need to disable interrupts: atomics are locked
>2. faster because a spinlock takes an exclusive line every time but the
> read to check the value can be in shared mode in cmpxchg
e and unquiesce balanced
commit: fba9539fc2109740e70e77c303dec50d1411e11f
[3/3] dm: don't stop request queue after the dm device is suspended
commit: e719593760c34fbf346fc6e348113e042feb5f63
Best regards,
--
Jens Axboe
encryption documentation
commit: 8e9f666a6e66d3f882c094646d35536d2759103a
Best regards,
--
Jens Axboe
ca38525129b14a20117eb
[8/9] rnbd: add error handling support for add_disk()
commit: 2e9e31bea01997450397d64da43b6675e0adb9e3
[9/9] mtd: add add_disk() error handling
commit: 83b863f4a3f0de4ece7802d9121fed0c3e64145f
Best regards,
--
Jens Axboe
On 10/18/21 7:04 PM, Kari Argillander wrote:
> On Mon, Oct 18, 2021 at 11:53:08AM -0600, Jens Axboe wrote:
>
> snip..
>
>> diff --git a/include/linux/genhd.h b/include/linux/genhd.h
>> index 7b0326661a1e..a967b3fb3c71 100644
>> --- a/include/linux/genhd.h
On 10/18/21 11:49 AM, Christoph Hellwig wrote:
> On Mon, Oct 18, 2021 at 11:40:51AM -0600, Jens Axboe wrote:
>> static inline loff_t bdev_nr_bytes(struct block_device *bdev)
>> {
>> -return i_size_read(bdev->bd_inode);
>> +return bdev->bd_nr_se
use sb_bdev_nr_blocks
commit: ea8befeb35c47cf95012032850fe3f0ec80e5cde
Best regards,
--
Jens Axboe
On 10/18/21 11:18 AM, Christoph Hellwig wrote:
> On Mon, Oct 18, 2021 at 11:16:08AM -0600, Jens Axboe wrote:
>> This looks good to me. Followup question, as it's related - I've got a
>> hacky patch that caches the inode size in the bdev:
>>
>> https://git.kernel.dk/cgit
c4946ee4
so we don't have to dip into the inode itself for the fast path. While
it's obviously not something being proposed for inclusion right now, is
there a world in which we can make something like that work?
--
Jens Axboe
On 9/27/21 3:59 PM, Luis Chamberlain wrote:
> We never checked for errors on add_disk() as this function
> returned void. Now that this is fixed, use the shiny new
> error handling.
Applied, thanks.
--
Jens Axboe
On 9/27/21 3:59 PM, Luis Chamberlain wrote:
> We never checked for errors on add_disk() as this function
> returned void. Now that this is fixed, use the shiny new
> error handling.
Applied, thanks.
--
Jens Axboe
er
> controller or in an architecture specific driver where highmem is
> impossible.
Applied, thanks.
--
Jens Axboe
On 7/20/21 8:53 PM, Guoqing Jiang wrote:
> From: Guoqing Jiang
>
> Move them (PAGE_SECTORS_SHIFT, PAGE_SECTORS and SECTOR_MASK) to the
> generic header file to remove redundancy.
Applied for 5.15, thanks.
--
Jens Axboe
; the disk and the dm directory hanging off it are not only visible once
> the initial table is loaded. This did not make a different to my testing
> using dmsetup and the lvm2 tools.
Applied, thanks.
--
Jens Axboe
e that the ps3disk has a minor conflict with the
> flush_kernel_dcache_page removal in linux-next through the -mm tree.
> I had hoped that change would go into 5.14, but it seems like it is
> being held for 5.15.
Applied for 5.15, thanks.
--
Jens Axboe
gt; in all drivers that do not have any caveats in their gendisk and
> request_queue lifetime rules.
Applied, thanks.
--
Jens Axboe
lowing tests:
>>> 1) zonefs tests on top of dm-crypt with a zoned nullblk device
>>> 2) zonefs tests on top of dm-crypt+dm-linear with an SMR HDD
>>> 3) btrfs fstests on top of dm-crypt with zoned nullblk devices.
>>>
>>> Comments are as always welcome.
; for cleanup/free them when a driver is unloaded or a device is removed.
>
> Together this removes the need to treat the gendisk and request_queue
> as separate entities for bio based drivers.
Applied, thanks.
--
Jens Axboe
mit c4a59c4e5db3 ("dm: stop using ->queuedata").
>
> So if only request_queue is given, we need to get its corresponding
> gendisk to get the private data stored in that gendisk.
Applied this one as a separate cleanup/helper.
--
Jens Axboe
dm-de
rface.
>>>>
>>>>
>>>
>>>
>>> Jeffle,
>>>
>>> I ran your above fio test on a linear LV split across 3 NVMes to
>>> second your split mapping
>>> (system: 32 core Intel, 256GiB RAM) comparing io engines sync, libaio
>>> and io_uring,
>>> the latter w/ and w/o hipri (sync+libaio obviously w/o registerfiles
>>> and hipri) which resulted ok:
>>>
>>>
>>>
>>> sync  | libaio | IRQ mode (hipri=0) | iopoll (hipri=1)
>>> ------|--------|--------------------|-----------------
>>> 56.3K | 290K   | 329K               | 351K
>>> I can't second your drastic hipri=1 drop here...
>>
>>
>> Sorry, email mess.
>>
>>
>> sync  | libaio | IRQ mode (hipri=0) | iopoll (hipri=1)
>> ------|--------|--------------------|-----------------
>> 56.3K | 290K   | 329K               | 351K
>>
>>
>>
>> I can't second your drastic hipri=1 drop here...
>>
>
> Hummm, that's indeed somewhat strange...
>
> My test environment:
> - CPU: 128 cores, though only one CPU core is used since
> 'cpus_allowed=14' in fio configuration
> - memory: 983G memory free
> - NVMe: Huawai ES3510P (HWE52P434T0L005N), with 'nvme.poll_queues=3'
>
> Maybe you didn't specify 'nvme.poll_queues=XXX'? In this case, IO still
> goes into IRQ mode, even you have specified 'hipri=1'?
That would be my guess too, and the patches also have a very suspicious
clear of HIPRI which shouldn't be there (which would let that fly through).
--
Jens Axboe
poll for the last one.
I took a quick look, and this seems very broken. You must not poll off
the submission path, polling should be invoked by the higher layer when
someone wants to reap events. IOW, dm should not be calling blk_poll()
by itself, only off mq_ops->poll(). Your patch seems to do i
On 3/4/21 3:14 AM, Mikulas Patocka wrote:
>
>
> On Wed, 3 Mar 2021, Jens Axboe wrote:
>
>> On 3/2/21 12:05 PM, Mikulas Patocka wrote:
>>
>> There seems to be something wrong with how this series is being sent
>> out. I have 1/4 and 3/4, but both are just