Re: [dm-devel] [PATCH RFC] dm thin: Add support for online trim to dm-thinpool

2023-10-09 Thread Joe Thornber
On Sat, Oct 7, 2023 at 2:33 AM Sarthak Kukreti wrote: > Currently, dm-thinpool only supports offline trim: there is > a userspace tool called `thin_trim` (part of `thin-provisioning-tools`) > that will look up all the unmapped regions within the thinpool > and issue discards to these regions.
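
In outline, offline trim boils down to the loop below. This is only a rough userspace C sketch, assuming the free-region list has already been extracted from the pool metadata (which is the hard part thin_trim actually does); BLKDISCARD is the standard Linux ioctl for issuing a discard from userspace, and the data device would be opened read-write.

    /* Rough sketch: issue a discard for every region the pool metadata
     * reports as unallocated.  The region list is assumed to come from
     * parsing the metadata, as thin_trim does; error handling trimmed. */
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>                       /* BLKDISCARD */

    struct region { uint64_t offset, len; };    /* in bytes */

    static int discard_regions(int data_fd, const struct region *r, unsigned n)
    {
            for (unsigned i = 0; i < n; i++) {
                    uint64_t range[2] = { r[i].offset, r[i].len };
                    if (ioctl(data_fd, BLKDISCARD, &range) < 0)
                            return -1;
            }
            return 0;
    }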

Re: [dm-devel] [PATCH v7 0/5] Introduce provisioning primitives

2023-05-30 Thread Joe Thornber
On Tue, May 30, 2023 at 3:02 PM Mike Snitzer wrote: > > Also Joe, for your proposed dm-thinp design where you distinguish > between "provision" and "reserve": Would it make sense for REQ_META > (e.g. all XFS metadata) with REQ_PROVISION to be treated as an > LBA-specific hard request? Whereas

Re: [dm-devel] [PATCH v7 0/5] Introduce provisioning primitives

2023-05-30 Thread Joe Thornber
On Sat, May 27, 2023 at 12:45 AM Dave Chinner wrote: > On Fri, May 26, 2023 at 12:04:02PM +0100, Joe Thornber wrote: > > > 1) We have an api (ioctl, bio flag, whatever) that lets you > > reserve/guarantee a region: > > > > int reserve_region(dev, sector_t begin
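
The snippet cuts off mid-signature; purely as an illustration of the shape of the interface being discussed (these declarations are hypothetical, nothing landed in this form):

    /* Hypothetical sketch of the proposed reservation interface -- not a
     * real kernel API.  A successful reserve guarantees that later writes
     * to [begin, begin + len) will not fail with -ENOSPC. */
    int reserve_region(struct block_device *dev, sector_t begin, sector_t len);
    int release_region(struct block_device *dev, sector_t begin, sector_t len);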

Re: [dm-devel] [PATCH v7 0/5] Introduce provisioning primitives

2023-05-26 Thread Joe Thornber
Here's my take: I don't see why the filesystem cares if thinp is doing a reservation or provisioning under the hood. All that matters is that a future write to that region will be honoured (barring device failure etc.). I agree that the reservation/force mapped status needs to be inherited by

Re: [dm-devel] [PATCH v3 2/3] dm: Add support for block provisioning

2023-04-14 Thread Joe Thornber
On Fri, Apr 14, 2023 at 7:52 AM Sarthak Kukreti wrote: > Add support to dm devices for REQ_OP_PROVISION. The default mode > is to passthrough the request to the underlying device, if it > supports it. dm-thinpool uses the provision request to provision > blocks for a dm-thin device. dm-thinpool
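
The default passthrough behaviour the patch describes is easy to picture; an illustrative C sketch, assuming the REQ_OP_PROVISION opcode from this (never-merged) series and a hypothetical single-device target:

    static int passthrough_map(struct dm_target *ti, struct bio *bio)
    {
            struct dm_dev *dev = ti->private;  /* hypothetical: one underlying dev */

            if (bio_op(bio) == REQ_OP_PROVISION) {
                    /* don't consume it here; let the lower device provision */
                    bio_set_dev(bio, dev->bdev);
                    return DM_MAPIO_REMAPPED;
            }
            return DM_MAPIO_KILL;              /* rest of the map logic elided */
    }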

Re: [dm-devel] [dm-6.4 PATCH 1/8] dm: split discards further if target sets max_discard_granularity

2023-03-23 Thread Joe Thornber
'max_discard_sectors'. > This treats 'discard_granularity' as a "min_discard_granularity" and > 'max_discard_sectors' as a "max_discard_granularity". > > Requested-by: Joe Thornber > Signed-off-by: Mike Snitzer > --- > drivers/md/dm.c | 54 +++
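
The rule being requested, roughly: split a discard into pieces no larger than max_discard_sectors, keeping every piece except the last aligned to discard_granularity. A hedged sketch of just that rule (not the actual drivers/md/dm.c code):

    static sector_t discard_piece_len(sector_t remaining, sector_t max_sectors,
                                      sector_t granularity)
    {
            sector_t len = min(remaining, max_sectors);

            /* align all but the final piece; plain modulo rather than
             * round_down(), since thin block sizes need not be powers of 2 */
            if (len < remaining && len > granularity)
                    len -= len % granularity;
            return len;
    }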

Re: [dm-devel] Thin pool CoW latency

2023-03-06 Thread Joe Thornber
On Sun, Mar 5, 2023 at 8:40 PM Demi Marie Obenour < d...@invisiblethingslab.com> wrote: > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA512 > > Like Eric, I am very concerned about CoW latency and throughput. I > am almost certain that allocating new blocks and snapshot copy-on-write > are _the_

Re: [dm-devel] [announce] thin-provisioning-tools v1.0.0-rc1

2023-03-06 Thread Joe Thornber
Hi Eric, On Fri, Mar 3, 2023 at 9:21 PM Eric Wheeler wrote: > > It would be nice to get people testing the new improvements: > > Do you think it can make it for the 6.3 merge window that is open? > Doubtful. The bulk of the changes are in dm-bufio, which is used by a lot of targets. So I

Re: [dm-devel] [announce] thin-provisioning-tools v1.0.0-rc1

2023-03-02 Thread Joe Thornber
Hi Eric, On Wed, Mar 1, 2023 at 10:26 PM Eric Wheeler wrote: > > Hurrah! I've been looking forward to this for a long time... > > > ...So if you have any commentary on the future of dm-thin with respect > to metadata range support, or dm-thin performance in general, that I would > be very

Re: [dm-devel] [PATCH 2/2] dm-thin: Allow specifying an offset

2023-02-07 Thread Joe Thornber
Nack. I'm not building a linear target into every other target. Layering targets is simple. On Tue, Feb 7, 2023 at 7:56 AM Demi Marie Obenour < d...@invisiblethingslab.com> wrote: > This allows exposing only part of a thin volume without having to layer > dm-linear. One use-case is a

Re: [dm-devel] [PATCH 1/2] Fail I/O to thin pool devices

2023-02-07 Thread Joe Thornber
Nack. I don't see the security issue; how is this any different from running the thin tools on any incorrect device? Or even the data device that the pool is mirroring. In general the thin tools don't modify the metadata they're running on. If you know of a security issue with the thin tools

[dm-devel] [announce] thin-provisioning-tools v1.0.0-rc1

2022-12-12 Thread Joe Thornber
We're pleased to announce the first release candidate of v1.0.0 of the thin-provisioning-tools (which also contains tools for dm-cache and dm-era). Please try it out on your test systems and give us feedback, in particular regarding documentation and the build and install process.

Re: [dm-devel] [PATCH -next] dm thin: Use last transaction's pmd->root when commit failed

2022-12-08 Thread Joe Thornber
Acked-by: Joe Thornber On Thu, Dec 8, 2022 at 2:07 PM Zhihao Cheng wrote: > Recently we found a softlockup problem in dm thin pool btree lookup, > caused by corrupted metadata: > Kernel panic - not syncing: softlockup: hung tasks > CPU: 7 PID: 2669225 Comm: kworker/u16:3 >

Re: [dm-devel] [PATCH 4/4 v2] persistent-data: reduce lock contention while walking the btree

2022-10-13 Thread Joe Thornber
On the other hand, I don't like the size of my patch (~1200 line diff). I'll post it when it's complete and we can continue the discussion then. - Joe On Wed, Oct 12, 2022 at 7:31 AM Joe Thornber wrote: > Thanks Mikulas, > > I'll test this morning. > > - Joe > > > On Tue,

Re: [dm-devel] [PATCH 4/4 v2] persistent-data: reduce lock contention while walking the btree

2022-10-12 Thread Joe Thornber
Thanks Mikulas, I'll test this morning. - Joe On Tue, Oct 11, 2022 at 8:10 PM Mikulas Patocka wrote: > Hi > > Here I'm sending updated patch 4 that fixes hang on discard. We must not > do the optimization in dm_btree_lookup_next. > > Mikulas > > > From: Mikulas Patocka > > This patch

Re: [dm-devel] [lvm-devel] kernel BUG at drivers/md/persistent-data/dm-space-map-disk.c:178

2020-01-07 Thread Joe Thornber
On Tue, Jan 07, 2020 at 10:46:27AM +, Joe Thornber wrote: > I'll get a patch to you later today. Eric, Patch below. I've run it through a bunch of tests in the dm test suite. But obviously I have never hit your issue. Will do more testing today. - Joe Author: Joe Thornber Date:

Re: [dm-devel] [lvm-devel] kernel BUG at drivers/md/persistent-data/dm-space-map-disk.c:178

2020-01-07 Thread Joe Thornber
On Tue, Jan 07, 2020 at 10:35:46AM +, Joe Thornber wrote: > On Sat, Dec 28, 2019 at 02:13:07AM +, Eric Wheeler wrote: > > On Fri, 27 Dec 2019, Eric Wheeler wrote: > > > > Just hit the bug again without mq-scsi (scsi_mod.use_blk_mq=n), all other > >

Re: [dm-devel] dm-thin: Several Questions on dm-thin performance.

2019-12-18 Thread Joe Thornber
On Sun, Dec 15, 2019 at 09:44:49PM +, Eric Wheeler wrote: > I was looking through the dm-bio-prison-v2 commit for dm-cache (b29d4986d) > and it is huge, ~5k lines. Do you still have a git branch with these > commits in smaller pieces (not squashed) so we can find the bits that > might be

Re: [dm-devel] [PATCH 2/2] dm thin: Flush data device before committing metadata

2019-12-04 Thread Joe Thornber
On Wed, Dec 04, 2019 at 04:07:42PM +0200, Nikos Tsironis wrote: > The thin provisioning target maintains per thin device mappings that map > virtual blocks to data blocks in the data device. Ack. But I think we're issuing the FLUSH twice with your patch. Since the original bio is still

Re: [dm-devel] dm-thin: Several Questions on dm-thin performance.

2019-12-04 Thread Joe Thornber
(These notes are for my own benefit as much as anything, I haven't worked on this for a couple of years and will forget it all completely if I don't write it down somewhere). Let's start by writing some pseudocode for what the remap function for thin provisioning actually does.
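
The archive cuts the message off before the pseudocode itself; the gist of the thin remap decision, as a condensed C-style sketch (names simplified, not the actual dm-thin code):

    remap(bio) {
            b = virtual_block(bio);
            r = btree_lookup(thin->mappings, b);
            if (r.found && !(r.shared && is_write(bio)))
                    remap_to_data_dev(bio, r.data_block);   /* fast path */
            else
                    defer_to_worker(bio);   /* provision, break sharing, or zero-fill */
    }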

Re: [dm-devel] [PATCH] dm btree: increase rebalance threshold in __rebalance2()

2019-12-03 Thread Joe Thornber
Ack. Thank you. On Tue, Dec 03, 2019 at 07:42:58PM +0800, Hou Tao wrote: > We got the following warnings from thin_check during thin-pool setup: > > $ thin_check /dev/vdb > examining superblock > examining devices tree > missing devices: [1, 84] > too few entries in btree_node:

Re: [dm-devel] dm-thin: Several Questions on dm-thin performance.

2019-12-03 Thread Joe Thornber
On Mon, Dec 02, 2019 at 10:26:00PM +, Eric Wheeler wrote: > Hi Joe, > > I'm not sure if I will have the time but thought I would start the > research and ask a few questions. I looked at the v1/v2 .h files and some > of the functions just change suffix to _v2 and maybe calling >

Re: [dm-devel] dm-thin: Several Questions on dm-thin performance.

2019-11-22 Thread Joe Thornber
On Fri, Nov 22, 2019 at 11:14:15AM +0800, JeffleXu wrote: > The first question is: what's the purpose of the data cell? In thin_bio_map(), > a normal bio will be packed as a virtual cell and a data cell. I can understand > that the virtual cell is used to prevent discard bios and non-discard bios > targeting
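
For reference, the two cells live in different keyspaces; a simplified view of the key (the real dm_cell_key in dm-bio-prison covers a block range; see build_virtual_key()/build_data_key() in dm-thin.c for the actual construction):

    /* Virtual cells serialise bios against the same *virtual* block of a
     * thin device; data cells serialise against the *physical* block,
     * which may be shared between snapshots. */
    struct cell_key {
            int virtual;        /* 1 = virtual keyspace, 0 = data keyspace */
            uint64_t dev;       /* thin device id */
            uint64_t block;     /* virtual or physical block number */
    };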

Re: kernel BUG at drivers/md/persistent-data/dm-space-map-disk.c:178 with scsi_mod.use_blk_mq=y

2019-09-27 Thread Joe Thornber
Hi Eric, On Thu, Sep 26, 2019 at 06:27:09PM +, Eric Wheeler wrote: > I pvmoved the tmeta to an SSD logical volume (dm-linear) on a non-bcache > volume and we got the same trace this morning, so while the tdata still > passes through bcache, all meta operations are direct to an SSD. This is

Re: [dm-devel] Why does dm-thin pool metadata space map use 4K page to carry index ?

2019-09-05 Thread Joe Thornber
On Thu, Sep 05, 2019 at 02:43:28PM +0800, jianchao wang wrote: > But why does it use this 4K page instead of a btree as the disk sm? > > The brb mechanism seems able to avoid the nested block allocation > when doing COW on the metadata sm btree. > > Would anyone please help explain why it uses

Re: [dm-devel] [PATCH v2] dm thin: Fix bug wrt FUA request completion

2019-02-15 Thread Joe Thornber
Ack. Thanks for this. I was under the mistaken impression that FUA requests got split by core dm into separate payload and PREFLUSH requests. I've audited dm-cache and that looks ok. How did you test this patch? That missing bio_list_init() in V1 must have caused memory corruption? - Joe On
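
The ordering requirement behind the fix, as a hedged sketch (the real dm-thin code routes these bios through its deferred lists; this only shows the invariant):

    static void complete_bio(struct pool *pool, struct bio *bio)
    {
            /* a FUA/flush bio must not complete until the metadata that
             * maps it has been committed */
            if (bio->bi_opf & (REQ_FUA | REQ_PREFLUSH)) {
                    bio_list_add(&pool->deferred_flush_bios, bio);
                    return;         /* ended after the next commit */
            }
            bio_endio(bio);
    }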

Re: [dm-devel] extracting thin mappings in real time

2018-10-04 Thread Joe Thornber
On Wed, Oct 03, 2018 at 04:47:41PM +0100, Thanos Makatos wrote: > pool metadata object can return this information? I've started looking > at thin_bio_map(), is this the best place to start? See thin-metadata.h - Joe -- dm-devel mailing list dm-devel@redhat.com
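
The relevant entry point in that header is the block lookup; roughly (check drivers/md/dm-thin-metadata.h in your tree for the exact signature):

    struct dm_thin_lookup_result {
            dm_block_t block;   /* physical block in the data device */
            bool shared;        /* also mapped by a snapshot? */
    };

    int dm_thin_find_block(struct dm_thin_device *td, dm_block_t block,
                           int can_issue_io,
                           struct dm_thin_lookup_result *result);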

Re: [dm-devel] extracting thin mappings in real time

2018-10-03 Thread Joe Thornber
On Wed, Oct 03, 2018 at 03:13:36PM +0100, Thanos Makatos wrote: > > Could you say more about why you want to do this? > > > > So that I can directly read the data block without having to pass through > dm-thin, e.g. there might be a more direct datapath to the physical block > device. > >

Re: [dm-devel] extracting thin mappings in real time

2018-10-03 Thread Joe Thornber
On Wed, Oct 03, 2018 at 01:40:22PM +0100, Thanos Makatos wrote: > I have a kernel module that sits on top of a thin device mapper target that > receives block I/O requests and re-submits them to the thin target. I would > like to implement the following functionality: whenever I receive a write >

Re: [dm-devel] dm thin: data block's ref count is not zero but the block does not belong to any device.

2018-09-27 Thread Joe Thornber
On Tue, Sep 25, 2018 at 11:13:17PM -0400, monty wrote: > > Hi! I hit a problem with dm-thin: a thin-pool has no volumes but its > nr_free_blocks_data is not zero. I guess the scenario of this problem is like: > a. create a thin volume thin01, size is 10GB; > b. write 10GB to thin01; > c. create a

Re: [dm-devel] dm thin: superblock write may succeed before other metadata blocks because of writing metadata in async mode.

2018-06-20 Thread Joe Thornber
On Wed, Jun 20, 2018 at 01:03:57PM -0400, monty wrote: > Hi, Mike and Joe. Thanks for your reply. I read __commit_transaction > many times and didn't find any problem with the 2-phase commit. I use > md-raid1(PCIe nvme and md-raid5) in write-behind mode to store dm-thin > metadata. > Test case: > 1. I
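
The 2-phase commit under discussion has a simple shape; a hedged sketch (helper names hypothetical) of the invariant an async write path must not break:

    static int commit(struct metadata *md)
    {
            int r;

            /* phase 1: write and flush everything except the superblock */
            r = write_and_flush_all_but_superblock(md);     /* hypothetical */
            if (r)
                    return r;

            /* phase 2: only now write the superblock, with FLUSH/FUA, so a
             * crash can never leave it pointing at unwritten metadata */
            return write_superblock_flush_fua(md);          /* hypothetical */
    }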

Re: [dm-devel] dm thin: superblock write may succeed before other metadata blocks because of writing metadata in async mode.

2018-06-19 Thread Joe Thornber
On Tue, Jun 19, 2018 at 09:11:06AM -0400, Mike Snitzer wrote: > On Mon, May 21 2018 at 8:53pm -0400, > Monty Pavel wrote: > > > > > If dm_bufio_write_dirty_buffers func is called by __commit_transaction > > func and power loss happens while executing it, coincidentally > > the superblock wrote

Re: [dm-devel] dm-thin: Why is DATA_DEV_BLOCK_SIZE_MIN_SECTORS set to 64k?

2018-06-12 Thread Joe Thornber
On Sat, Jun 09, 2018 at 07:31:54PM +, Eric Wheeler wrote: > I understand the choice. What I am asking is this: would it be safe to > let others make their own choice about block size provided they are warned > about the metadata-chunk-size/pool-size limit tradeoff? > > If it is safe, can

[dm-devel] Patch [1/1] Fix bug in btree_split_beneath()

2017-12-20 Thread Joe Thornber
[dm-thin] Fix bug in btree_split_beneath() When inserting a new key/value pair into a btree, we walk down the spine of btree nodes performing the following 2 operations: i) making space for a new entry; ii) adjusting the first key entry if the new key is lower than any in the node. If the _root_
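
In pseudocode, the insert walk described above looks roughly like this (a simplified C-style sketch, not the actual persistent-data code):

    insert(root, key, value) {
            for (node = root; !is_leaf(node); node = child_for(node, key)) {
                    if (node_full(node))
                            split(node);                 /* i) make space */
                    if (key < first_key(node))
                            set_first_key(node, key);    /* ii) adjust minima */
            }
            leaf_insert(node, key, value);
    }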

Re: [dm-devel] [PATCH] persistent-data: fix bug about btree of updating internal node's minima key in btree_split_beneath.

2017-12-19 Thread Joe Thornber
On Mon, Dec 18, 2017 at 05:34:09PM +, Joe Thornber wrote: > Patch below. This is completely untested. I'll test tomorrow and update. The patch appears to work. I'm using this test to reproduce the problem: https://github.com/jthornber/thin-provisioning-tools/blob/master/functio

Re: [dm-devel] [PATCH] persistent-data: fix bug about btree of updating internal node's minima key in btree_split_beneath.

2017-12-18 Thread Joe Thornber
On Mon, Dec 18, 2017 at 05:13:08PM +, Joe Thornber wrote: > Hi Monty, > > On Mon, Dec 18, 2017 at 04:27:58PM -0500, monty wrote: > > Subject: [PATCH] persistent-data: fix bug about btree of updating internal > > node's minima > > key in btree_split_benea

Re: [dm-devel] [PATCH] persistent-data: fix bug about btree of updating internal node's minima key in btree_split_beneath.

2017-12-18 Thread Joe Thornber
Hi Monty, On Mon, Dec 18, 2017 at 04:27:58PM -0500, monty wrote: > Subject: [PATCH] persistent-data: fix bug about btree of updating internal > node's minima > key in btree_split_beneath. > > fix bug in btree_split_beneath func; this bug may cause a key that had > been inserted into the btree, but

Re: [dm-devel] Significantly dropped dm-cache performance in 4.13 compared to 4.11

2017-11-14 Thread Joe Thornber
On Mon, Nov 13, 2017 at 02:01:11PM -0500, Mike Snitzer wrote: > On Mon, Nov 13 2017 at 12:31pm -0500, > Stefan Ring <stefan...@gmail.com> wrote: > > > On Thu, Nov 9, 2017 at 4:15 PM, Stefan Ring <stefan...@gmail.com> wrote: > > > On Tue, Nov 7, 2017 at 3:41 P

Re: [dm-devel] Significantly dropped dm-cache performance in 4.13 compared to 4.11

2017-11-07 Thread Joe Thornber
On Fri, Nov 03, 2017 at 07:50:23PM +0100, Stefan Ring wrote: > It strikes me as odd that the amount read from the spinning disk is > actually more than what comes out of the combined device in the end. This suggests dm-cache is trying to promote way too much. I'll try and reproduce the issue,

Re: [dm-devel] dm-cache coherence issue

2017-06-27 Thread Joe Thornber
On Mon, Jun 26, 2017 at 10:36:23PM +0200, Johannes Bauer wrote: > On 26.06.2017 21:56, Mike Snitzer wrote: > > >> Interesting, I did *not* change to writethrough. However, there > >> shouldn't have been any I/O on the device (it was not accessed by > >> anything after I switched to the cleaner

Re: [dm-devel] dm-cache coherence issue

2017-06-26 Thread Joe Thornber
On Mon, Jun 26, 2017 at 12:33:42PM +0100, Joe Thornber wrote: > On Sat, Jun 24, 2017 at 03:56:54PM +0200, Johannes Bauer wrote: > > So I seem to have a very basic misunderstanding of what the cleaner > > policy/dirty pages mean. Is there a way to force the cache to flush > >

Re: [dm-devel] dm-cache coherence issue

2017-06-26 Thread Joe Thornber
On Sat, Jun 24, 2017 at 03:56:54PM +0200, Johannes Bauer wrote: > So I seem to have a very basic misunderstanding of what the cleaner > policy/dirty pages mean. Is there a way to force the cache to flush > entirely? Apparently, "dmsetup wait" and/or "sync" don't do the job. Your understanding is

Re: [dm-devel] [RFC] dm-thin: Heuristic early chunk copy before COW

2017-03-09 Thread Joe Thornber
Hi Eric, On Wed, Mar 08, 2017 at 10:17:51AM -0800, Eric Wheeler wrote: > Hello all, > > For dm-thin volumes that are snapshotted often, there is a performance > penalty for writes because of COW overhead since the modified chunk needs > to be copied into a freshly allocated chunk. > > What if

Re: [dm-devel] [PATCH] thin_dump: added --device-id, --skip-mappings, and new output --format's

2016-03-19 Thread Joe Thornber
On Tue, Mar 15, 2016 at 10:59:15AM +, Thanos Makatos wrote: > On 15 March 2016 at 01:45, Eric Wheeler wrote: > > Hi Joe, > > > > Please review the patch below when you have a moment. I am interested in > > your feedback, and also interested in having this

Re: [dm-devel] [PATCH] thin_dump: added --device-id, --skip-mappings, and new output --format's

2016-03-19 Thread Joe Thornber
If you're skipping the mappings, does the new thin_ls provide enough information for you? - Joe On Tue, Mar 15, 2016 at 01:45:15AM +, Eric Wheeler wrote: > Hi Joe, > > Please review the patch below when you have a moment. I am interested in > your feedback, and also interested in having

Re: [dm-devel] dm-cache: blocks don't get cached on 3.18.21-17.el6.x86_64

2016-03-14 Thread Joe Thornber
On Mon, Mar 14, 2016 at 09:54:06AM +, Thanos Makatos wrote: > (I've already reported this issue to centos and centos-devel, and > waited long enough but didn't get any reply.) > > I'm evaluating dm-cache on CentOS 6 kernels 3.18.21-17.el6.x86_64 (Xen 4) and > 2.6.32-573.7.1.el6.x86_64 (KVM).

Re: [dm-devel] [PATCH] [dm-cache] Make the mq policy an alias for smq

2016-02-11 Thread Joe Thornber
On Wed, Feb 10, 2016 at 12:06:00PM -0500, John Stoffel wrote: > > Can you add in some documentation on how you tell which dm_cache > policy is actually being used, and how to measure it, etc? It's a > black box and some info would be nice. You can get some stats on the cache performance via the

Re: [dm-devel] I/O block when removing thin device on the same pool

2016-01-29 Thread Joe Thornber
On Fri, Jan 29, 2016 at 03:50:31PM +0100, Lars Ellenberg wrote: > On Fri, Jan 22, 2016 at 04:43:46PM +0000, Joe Thornber wrote: > > On Fri, Jan 22, 2016 at 02:38:28PM +0100, Lars Ellenberg wrote: > > > We have seen lvremove of thin snapshots sometimes take minutes,